Dec 13 04:52:59.042703 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 04:52:59.042741 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 04:52:59.042755 kernel: BIOS-provided physical RAM map: Dec 13 04:52:59.042771 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 04:52:59.042782 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 04:52:59.042792 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 04:52:59.042804 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 13 04:52:59.042815 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 13 04:52:59.042825 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 04:52:59.042836 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 04:52:59.042847 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 04:52:59.042857 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 04:52:59.042873 kernel: NX (Execute Disable) protection: active Dec 13 04:52:59.042884 kernel: APIC: Static calls initialized Dec 13 04:52:59.042897 kernel: SMBIOS 2.8 present. Dec 13 04:52:59.042909 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 13 04:52:59.042921 kernel: Hypervisor detected: KVM Dec 13 04:52:59.042937 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 04:52:59.042949 kernel: kvm-clock: using sched offset of 4310555157 cycles Dec 13 04:52:59.042962 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 04:52:59.042974 kernel: tsc: Detected 2499.998 MHz processor Dec 13 04:52:59.042986 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 04:52:59.042998 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 04:52:59.043029 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 13 04:52:59.043044 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 04:52:59.043056 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 04:52:59.043074 kernel: Using GB pages for direct mapping Dec 13 04:52:59.043086 kernel: ACPI: Early table checksum verification disabled Dec 13 04:52:59.043098 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 13 04:52:59.043110 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:52:59.043122 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:52:59.043134 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:52:59.043145 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 13 04:52:59.043157 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:52:59.043169 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 13 04:52:59.043186 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:52:59.043198 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:52:59.043210 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 13 04:52:59.043221 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 13 04:52:59.043234 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 13 04:52:59.043252 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 13 04:52:59.043264 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 13 04:52:59.043281 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 13 04:52:59.043294 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 13 04:52:59.043306 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 04:52:59.043319 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 04:52:59.043331 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Dec 13 04:52:59.043343 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Dec 13 04:52:59.043355 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Dec 13 04:52:59.043372 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Dec 13 04:52:59.043384 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Dec 13 04:52:59.043397 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Dec 13 04:52:59.043409 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Dec 13 04:52:59.043421 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Dec 13 04:52:59.043433 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Dec 13 04:52:59.043445 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Dec 13 04:52:59.043458 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Dec 13 04:52:59.043470 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Dec 13 04:52:59.043482 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Dec 13 04:52:59.043513 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Dec 13 04:52:59.043526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 04:52:59.043538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 04:52:59.043550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 13 04:52:59.043563 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Dec 13 04:52:59.043576 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Dec 13 04:52:59.043588 kernel: Zone ranges: Dec 13 04:52:59.043601 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 04:52:59.043613 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 13 04:52:59.043630 kernel: Normal empty Dec 13 04:52:59.043643 kernel: Movable zone start for each node Dec 13 04:52:59.043655 kernel: Early memory node ranges Dec 13 04:52:59.043667 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 04:52:59.043680 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 13 04:52:59.043692 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 13 04:52:59.043704 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 04:52:59.043717 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 04:52:59.043729 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 13 04:52:59.043742 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 04:52:59.043759 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 04:52:59.043771 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Dec 13 04:52:59.043784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 04:52:59.043796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 04:52:59.043823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 04:52:59.043835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 04:52:59.043848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 04:52:59.043860 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 04:52:59.043872 kernel: TSC deadline timer available Dec 13 04:52:59.043890 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Dec 13 04:52:59.043903 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 04:52:59.043916 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 04:52:59.043928 kernel: Booting paravirtualized kernel on KVM Dec 13 04:52:59.043941 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 04:52:59.043953 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 13 04:52:59.043966 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Dec 13 04:52:59.043978 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Dec 13 04:52:59.043990 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 13 04:52:59.044008 kernel: kvm-guest: PV spinlocks enabled Dec 13 04:52:59.044712 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 04:52:59.044728 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 04:52:59.044741 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 04:52:59.044754 kernel: random: crng init done Dec 13 04:52:59.044766 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 04:52:59.044779 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 04:52:59.044791 kernel: Fallback order for Node 0: 0 Dec 13 04:52:59.044811 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Dec 13 04:52:59.044824 kernel: Policy zone: DMA32 Dec 13 04:52:59.044836 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 04:52:59.044849 kernel: software IO TLB: area num 16. Dec 13 04:52:59.044861 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved) Dec 13 04:52:59.044874 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 13 04:52:59.044887 kernel: Kernel/User page tables isolation: enabled Dec 13 04:52:59.044899 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 04:52:59.044912 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 04:52:59.044929 kernel: Dynamic Preempt: voluntary Dec 13 04:52:59.044942 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 04:52:59.044955 kernel: rcu: RCU event tracing is enabled. 
Dec 13 04:52:59.044968 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 13 04:52:59.044981 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 04:52:59.045006 kernel: Rude variant of Tasks RCU enabled. Dec 13 04:52:59.045042 kernel: Tracing variant of Tasks RCU enabled. Dec 13 04:52:59.045056 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 04:52:59.045069 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 13 04:52:59.045082 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 13 04:52:59.045095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 04:52:59.045108 kernel: Console: colour VGA+ 80x25 Dec 13 04:52:59.045127 kernel: printk: console [tty0] enabled Dec 13 04:52:59.045140 kernel: printk: console [ttyS0] enabled Dec 13 04:52:59.045153 kernel: ACPI: Core revision 20230628 Dec 13 04:52:59.045166 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 04:52:59.045179 kernel: x2apic enabled Dec 13 04:52:59.045197 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 04:52:59.045211 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 04:52:59.045224 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 13 04:52:59.045237 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 04:52:59.045250 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 04:52:59.045263 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 04:52:59.045276 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 04:52:59.045289 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 04:52:59.045302 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 04:52:59.045320 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 04:52:59.045333 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 04:52:59.045346 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 04:52:59.045359 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 04:52:59.045372 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 04:52:59.045385 kernel: MMIO Stale Data: Unknown: No mitigations Dec 13 04:52:59.045398 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 04:52:59.045410 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 04:52:59.045424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 04:52:59.045436 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 04:52:59.045449 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 04:52:59.045467 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 04:52:59.045481 kernel: Freeing SMP alternatives memory: 32K Dec 13 04:52:59.045506 kernel: pid_max: default: 32768 minimum: 301 Dec 13 04:52:59.045520 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 04:52:59.045533 kernel: landlock: Up and running. Dec 13 04:52:59.045546 kernel: SELinux: Initializing. 
Dec 13 04:52:59.045559 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 04:52:59.045572 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 04:52:59.045585 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 13 04:52:59.045598 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 04:52:59.045611 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 04:52:59.045630 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 04:52:59.045644 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Dec 13 04:52:59.045657 kernel: signal: max sigframe size: 1776 Dec 13 04:52:59.045670 kernel: rcu: Hierarchical SRCU implementation. Dec 13 04:52:59.045683 kernel: rcu: Max phase no-delay instances is 400. Dec 13 04:52:59.045696 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 04:52:59.045709 kernel: smp: Bringing up secondary CPUs ... Dec 13 04:52:59.045722 kernel: smpboot: x86: Booting SMP configuration: Dec 13 04:52:59.045735 kernel: .... node #0, CPUs: #1 Dec 13 04:52:59.045753 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 04:52:59.045766 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 04:52:59.045779 kernel: smpboot: Max logical packages: 16 Dec 13 04:52:59.045792 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 13 04:52:59.045805 kernel: devtmpfs: initialized Dec 13 04:52:59.045818 kernel: x86/mm: Memory block size: 128MB Dec 13 04:52:59.045831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 04:52:59.045844 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 13 04:52:59.045857 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 04:52:59.045875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 04:52:59.045888 kernel: audit: initializing netlink subsys (disabled) Dec 13 04:52:59.045901 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 04:52:59.045914 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 04:52:59.045927 kernel: audit: type=2000 audit(1734065577.132:1): state=initialized audit_enabled=0 res=1 Dec 13 04:52:59.045940 kernel: cpuidle: using governor menu Dec 13 04:52:59.045953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 04:52:59.045966 kernel: dca service started, version 1.12.1 Dec 13 04:52:59.045979 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 04:52:59.045997 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 04:52:59.046031 kernel: PCI: Using configuration type 1 for base access Dec 13 04:52:59.046049 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 04:52:59.046062 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 04:52:59.046075 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 04:52:59.046088 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 04:52:59.046101 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 04:52:59.046114 kernel: ACPI: Added _OSI(Module Device) Dec 13 04:52:59.046137 kernel: ACPI: Added _OSI(Processor Device) Dec 13 04:52:59.046158 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 04:52:59.046171 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 04:52:59.046184 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 04:52:59.046197 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 04:52:59.046209 kernel: ACPI: Interpreter enabled Dec 13 04:52:59.046222 kernel: ACPI: PM: (supports S0 S5) Dec 13 04:52:59.046235 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 04:52:59.046249 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 04:52:59.046262 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 04:52:59.046280 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 04:52:59.046293 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 04:52:59.046559 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 04:52:59.046755 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 04:52:59.046934 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 04:52:59.046954 kernel: PCI host bridge to bus 0000:00 Dec 13 04:52:59.047147 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 04:52:59.047313 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 04:52:59.047468 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 04:52:59.047647 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 13 04:52:59.047807 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 04:52:59.047960 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 13 04:52:59.048144 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 04:52:59.048357 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 04:52:59.048577 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Dec 13 04:52:59.048759 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Dec 13 04:52:59.048929 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Dec 13 04:52:59.050521 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Dec 13 04:52:59.050720 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 04:52:59.050906 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.054074 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Dec 13 04:52:59.054343 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.054533 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Dec 13 04:52:59.054716 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.054900 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Dec 13 04:52:59.055684 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 
04:52:59.055937 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Dec 13 04:52:59.056192 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.056379 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Dec 13 04:52:59.056622 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.056796 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Dec 13 04:52:59.056998 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.058194 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Dec 13 04:52:59.058425 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 04:52:59.058610 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Dec 13 04:52:59.058831 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 04:52:59.059003 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 04:52:59.061532 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Dec 13 04:52:59.061709 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 04:52:59.061901 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Dec 13 04:52:59.062123 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 04:52:59.062322 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 04:52:59.062515 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Dec 13 04:52:59.062705 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Dec 13 04:52:59.062889 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 04:52:59.063841 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 04:52:59.065116 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 04:52:59.065296 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Dec 13 04:52:59.065468 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Dec 13 04:52:59.065701 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 04:52:59.065873 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 04:52:59.066357 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Dec 13 04:52:59.066576 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Dec 13 04:52:59.066765 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 04:52:59.066945 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 04:52:59.069202 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 04:52:59.069449 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 04:52:59.069723 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Dec 13 04:52:59.069923 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Dec 13 04:52:59.071235 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 04:52:59.071419 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 04:52:59.071624 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 04:52:59.071807 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Dec 13 04:52:59.071990 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 04:52:59.075569 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 04:52:59.075766 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 04:52:59.075974 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 
04:52:59.076219 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 04:52:59.076395 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 04:52:59.076582 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 04:52:59.076751 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 04:52:59.076939 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 04:52:59.077208 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 04:52:59.077386 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 04:52:59.077589 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 04:52:59.077755 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 04:52:59.077931 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 04:52:59.078133 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 04:52:59.078311 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 04:52:59.078506 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 04:52:59.078681 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 04:52:59.078871 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 04:52:59.080134 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 04:52:59.080357 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 04:52:59.080550 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 04:52:59.080729 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 04:52:59.080750 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 04:52:59.080765 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 04:52:59.080778 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 04:52:59.080817 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 04:52:59.080831 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 04:52:59.080859 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 04:52:59.080881 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 04:52:59.080900 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 04:52:59.080914 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 04:52:59.080927 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 04:52:59.080940 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 04:52:59.080953 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 04:52:59.080972 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 04:52:59.080986 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 04:52:59.080999 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 04:52:59.082041 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 04:52:59.082063 kernel: iommu: Default domain type: Translated Dec 13 04:52:59.082076 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 04:52:59.082089 kernel: PCI: Using ACPI for IRQ routing Dec 13 04:52:59.082102 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 04:52:59.082116 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 04:52:59.082136 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 13 04:52:59.082335 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Dec 13 04:52:59.082538 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 04:52:59.082707 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 04:52:59.082727 kernel: vgaarb: loaded Dec 13 04:52:59.082741 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 04:52:59.082755 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 04:52:59.082768 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 04:52:59.082781 kernel: pnp: PnP ACPI init Dec 13 04:52:59.082969 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 04:52:59.082991 kernel: pnp: PnP ACPI: found 5 devices Dec 13 04:52:59.083005 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 04:52:59.087090 kernel: NET: Registered PF_INET protocol family Dec 13 04:52:59.087124 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 04:52:59.087139 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 04:52:59.087153 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 04:52:59.087166 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 04:52:59.087199 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 04:52:59.087213 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 04:52:59.087226 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 04:52:59.087245 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 04:52:59.087258 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 04:52:59.087272 kernel: NET: Registered PF_XDP protocol family Dec 13 04:52:59.087541 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 13 04:52:59.087721 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 04:52:59.087923 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 04:52:59.088142 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 04:52:59.088318 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 04:52:59.088525 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 04:52:59.088722 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 04:52:59.088907 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 04:52:59.091524 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 04:52:59.091721 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 04:52:59.091895 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 04:52:59.092118 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 04:52:59.092290 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 04:52:59.092495 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 04:52:59.092678 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 04:52:59.092862 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Dec 13 04:52:59.095149 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 04:52:59.095366 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 
04:52:59.095601 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 04:52:59.095775 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 13 04:52:59.095945 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 04:52:59.096134 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 04:52:59.096316 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 04:52:59.096596 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 13 04:52:59.096887 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 04:52:59.098247 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 04:52:59.098422 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 04:52:59.098613 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 13 04:52:59.098780 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 04:52:59.098960 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 04:52:59.100775 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 04:52:59.100954 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 13 04:52:59.103226 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 04:52:59.103412 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 04:52:59.103612 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 04:52:59.103780 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 13 04:52:59.103946 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 04:52:59.104178 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 04:52:59.104349 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 04:52:59.104538 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 13 04:52:59.104717 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 04:52:59.104885 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 04:52:59.105084 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 04:52:59.105269 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 13 04:52:59.105470 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 04:52:59.105662 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 04:52:59.105842 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 04:52:59.106066 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 13 04:52:59.106234 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 04:52:59.106417 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 04:52:59.106611 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 04:52:59.106763 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 04:52:59.106918 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 04:52:59.107111 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 13 04:52:59.107273 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 04:52:59.107432 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 13 04:52:59.107633 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 13 04:52:59.107797 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 13 04:52:59.107980 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Dec 13 04:52:59.108230 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 04:52:59.108438 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 13 04:52:59.108631 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 04:52:59.108798 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 04:52:59.108995 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 13 04:52:59.109171 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 04:52:59.109348 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 04:52:59.109547 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 13 04:52:59.109749 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 04:52:59.109918 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 04:52:59.110131 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 13 04:52:59.110295 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 04:52:59.110469 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 04:52:59.110659 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 13 04:52:59.110843 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 04:52:59.111051 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 04:52:59.111242 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 13 04:52:59.111399 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 04:52:59.111577 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 04:52:59.111746 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 13 04:52:59.111907 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 04:52:59.112136 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 04:52:59.112159 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 04:52:59.112174 kernel: PCI: CLS 0 bytes, default 64 Dec 13 04:52:59.112188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 04:52:59.112202 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 04:52:59.112216 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 04:52:59.112231 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 04:52:59.112245 kernel: Initialise system trusted keyrings Dec 13 04:52:59.112259 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 04:52:59.112280 kernel: Key type asymmetric registered Dec 13 04:52:59.112294 kernel: Asymmetric key parser 'x509' registered Dec 13 04:52:59.112307 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 04:52:59.112321 kernel: io scheduler mq-deadline registered Dec 13 04:52:59.112342 kernel: io scheduler kyber registered Dec 13 04:52:59.112356 kernel: io scheduler bfq registered Dec 13 04:52:59.112542 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 04:52:59.112713 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 04:52:59.112886 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.113122 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 04:52:59.113290 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Dec 13 04:52:59.113462 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.113662 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 04:52:59.113830 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 04:52:59.114046 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.114227 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 04:52:59.114393 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 04:52:59.114594 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.114804 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 04:52:59.115108 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 04:52:59.115277 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.115600 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 04:52:59.115823 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 04:52:59.116040 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.116222 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 04:52:59.116387 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 04:52:59.116569 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.116752 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 04:52:59.116942 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 04:52:59.117157 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:52:59.117179 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 04:52:59.117195 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 04:52:59.117209 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 04:52:59.117231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 04:52:59.117246 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 04:52:59.117260 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 04:52:59.117274 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 04:52:59.117287 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 04:52:59.117301 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 04:52:59.117473 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 04:52:59.117658 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 04:52:59.117840 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T04:52:58 UTC (1734065578) Dec 13 04:52:59.118031 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 04:52:59.118054 kernel: intel_pstate: CPU model not supported Dec 13 04:52:59.118078 kernel: NET: Registered PF_INET6 protocol family Dec 13 04:52:59.118092 kernel: Segment Routing with IPv6 Dec 13 04:52:59.118106 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 
04:52:59.118119 kernel: NET: Registered PF_PACKET protocol family Dec 13 04:52:59.118139 kernel: Key type dns_resolver registered Dec 13 04:52:59.118153 kernel: IPI shorthand broadcast: enabled Dec 13 04:52:59.118174 kernel: sched_clock: Marking stable (1351003965, 254100034)->(1741641872, -136537873) Dec 13 04:52:59.118188 kernel: registered taskstats version 1 Dec 13 04:52:59.118210 kernel: Loading compiled-in X.509 certificates Dec 13 04:52:59.118224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 04:52:59.118238 kernel: Key type .fscrypt registered Dec 13 04:52:59.118251 kernel: Key type fscrypt-provisioning registered Dec 13 04:52:59.118265 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 04:52:59.118279 kernel: ima: Allocated hash algorithm: sha1 Dec 13 04:52:59.118298 kernel: ima: No architecture policies found Dec 13 04:52:59.118312 kernel: clk: Disabling unused clocks Dec 13 04:52:59.118325 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 04:52:59.118340 kernel: Write protecting the kernel read-only data: 36864k Dec 13 04:52:59.118353 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 04:52:59.118367 kernel: Run /init as init process Dec 13 04:52:59.118381 kernel: with arguments: Dec 13 04:52:59.118395 kernel: /init Dec 13 04:52:59.118408 kernel: with environment: Dec 13 04:52:59.118421 kernel: HOME=/ Dec 13 04:52:59.118440 kernel: TERM=linux Dec 13 04:52:59.118454 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 04:52:59.118471 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 04:52:59.118501 systemd[1]: Detected virtualization kvm. Dec 13 04:52:59.118517 systemd[1]: Detected architecture x86-64. Dec 13 04:52:59.118531 systemd[1]: Running in initrd. Dec 13 04:52:59.118546 systemd[1]: No hostname configured, using default hostname. Dec 13 04:52:59.118566 systemd[1]: Hostname set to . Dec 13 04:52:59.118582 systemd[1]: Initializing machine ID from VM UUID. Dec 13 04:52:59.118596 systemd[1]: Queued start job for default target initrd.target. Dec 13 04:52:59.118611 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 04:52:59.118626 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 04:52:59.118647 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 04:52:59.118662 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 04:52:59.118678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 04:52:59.118698 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 04:52:59.118715 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 04:52:59.118730 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 04:52:59.118745 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 04:52:59.118764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 04:52:59.118778 systemd[1]: Reached target paths.target - Path Units. Dec 13 04:52:59.118793 systemd[1]: Reached target slices.target - Slice Units. Dec 13 04:52:59.118813 systemd[1]: Reached target swap.target - Swaps. Dec 13 04:52:59.118828 systemd[1]: Reached target timers.target - Timer Units. Dec 13 04:52:59.118843 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 04:52:59.118858 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 04:52:59.118872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 04:52:59.118887 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 04:52:59.118902 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 04:52:59.118917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 04:52:59.118937 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 04:52:59.118952 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 04:52:59.118967 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 04:52:59.118982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 04:52:59.119002 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 04:52:59.119058 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 04:52:59.119077 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 04:52:59.119092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 04:52:59.119107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 04:52:59.119186 systemd-journald[201]: Collecting audit messages is disabled. Dec 13 04:52:59.119221 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 04:52:59.119237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 04:52:59.119251 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 04:52:59.119273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 04:52:59.119288 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 04:52:59.119303 kernel: Bridge firewalling registered Dec 13 04:52:59.119318 systemd-journald[201]: Journal started Dec 13 04:52:59.119351 systemd-journald[201]: Runtime Journal (/run/log/journal/bdc4d1320649468b815d10038120052a) is 4.7M, max 38.0M, 33.2M free. Dec 13 04:52:59.054181 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 04:52:59.174655 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 04:52:59.100046 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 04:52:59.176240 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 04:52:59.177410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 04:52:59.179235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 04:52:59.194343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 04:52:59.196199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 04:52:59.206240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 04:52:59.209210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 04:52:59.231890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 04:52:59.234678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 04:52:59.238602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 04:52:59.243271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 04:52:59.253351 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 04:52:59.258204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 04:52:59.268937 dracut-cmdline[237]: dracut-dracut-053 Dec 13 04:52:59.273420 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 04:52:59.310542 systemd-resolved[238]: Positive Trust Anchors: Dec 13 04:52:59.311793 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:52:59.311843 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 04:52:59.320271 systemd-resolved[238]: Defaulting to hostname 'linux'. Dec 13 04:52:59.322119 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 04:52:59.323328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 04:52:59.392112 kernel: SCSI subsystem initialized Dec 13 04:52:59.405048 kernel: Loading iSCSI transport class v2.0-870. Dec 13 04:52:59.419047 kernel: iscsi: registered transport (tcp) Dec 13 04:52:59.445687 kernel: iscsi: registered transport (qla4xxx) Dec 13 04:52:59.445822 kernel: QLogic iSCSI HBA Driver Dec 13 04:52:59.501664 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 04:52:59.511490 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 04:52:59.543971 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 04:52:59.544054 kernel: device-mapper: uevent: version 1.0.3 Dec 13 04:52:59.547042 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 04:52:59.594067 kernel: raid6: sse2x4 gen() 7842 MB/s Dec 13 04:52:59.617702 kernel: raid6: sse2x2 gen() 5450 MB/s Dec 13 04:52:59.634886 kernel: raid6: sse2x1 gen() 10088 MB/s Dec 13 04:52:59.634999 kernel: raid6: using algorithm sse2x1 gen() 10088 MB/s Dec 13 04:52:59.653800 kernel: raid6: .... xor() 7269 MB/s, rmw enabled Dec 13 04:52:59.653880 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 04:52:59.685069 kernel: xor: automatically using best checksumming function avx Dec 13 04:52:59.880053 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 04:52:59.898383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 04:52:59.905424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 04:52:59.931344 systemd-udevd[421]: Using default interface naming scheme 'v255'. Dec 13 04:52:59.938532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 04:52:59.948211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 04:52:59.978773 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Dec 13 04:53:00.022841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 04:53:00.029300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 04:53:00.152854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 04:53:00.162221 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 04:53:00.181916 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 04:53:00.190873 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 04:53:00.193854 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 04:53:00.194639 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 04:53:00.203268 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 04:53:00.232641 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 04:53:00.289049 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 13 04:53:00.327391 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 04:53:00.327421 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 04:53:00.327639 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 04:53:00.327662 kernel: GPT:17805311 != 125829119 Dec 13 04:53:00.327693 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 04:53:00.327712 kernel: GPT:17805311 != 125829119 Dec 13 04:53:00.327729 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 04:53:00.327747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:53:00.318787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 04:53:00.345139 kernel: AVX version of gcm_enc/dec engaged. Dec 13 04:53:00.345189 kernel: AES CTR mode by8 optimization enabled Dec 13 04:53:00.318978 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 04:53:00.321653 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 04:53:00.322477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 04:53:00.322666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 04:53:00.344210 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 04:53:00.358679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 04:53:00.366420 kernel: libata version 3.00 loaded. Dec 13 04:53:00.395321 kernel: ACPI: bus type USB registered Dec 13 04:53:00.396739 kernel: usbcore: registered new interface driver usbfs Dec 13 04:53:00.399221 kernel: usbcore: registered new interface driver hub Dec 13 04:53:00.402640 kernel: usbcore: registered new device driver usb Dec 13 04:53:00.407061 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (477) Dec 13 04:53:00.436044 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 04:53:00.451788 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 04:53:00.451831 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 04:53:00.452125 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 04:53:00.452365 kernel: scsi host0: ahci Dec 13 04:53:00.452596 kernel: scsi host1: ahci Dec 13 04:53:00.453264 kernel: scsi host2: ahci Dec 13 04:53:00.453478 kernel: scsi host3: ahci Dec 13 04:53:00.453676 kernel: scsi host4: ahci Dec 13 04:53:00.453869 kernel: scsi host5: ahci Dec 13 04:53:00.455274 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 04:53:00.455297 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 04:53:00.455316 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 04:53:00.455334 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 04:53:00.455364 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 04:53:00.455383 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 04:53:00.455401 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Dec 13 04:53:00.447196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 04:53:00.541731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 04:53:00.548575 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 04:53:00.549456 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 04:53:00.563200 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 04:53:00.575575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 04:53:00.591244 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 04:53:00.596194 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 04:53:00.598391 disk-uuid[561]: Primary Header is updated. Dec 13 04:53:00.598391 disk-uuid[561]: Secondary Entries is updated. Dec 13 04:53:00.598391 disk-uuid[561]: Secondary Header is updated. 
Dec 13 04:53:00.605116 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:53:00.615038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:53:00.637945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 04:53:00.759060 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 04:53:00.762328 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 04:53:00.762370 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 04:53:00.765222 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 04:53:00.766906 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 04:53:00.768689 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 04:53:00.783054 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 04:53:00.803680 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 04:53:00.803928 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 04:53:00.804170 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 04:53:00.804379 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 04:53:00.804617 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 04:53:00.804822 kernel: hub 1-0:1.0: USB hub found Dec 13 04:53:00.805284 kernel: hub 1-0:1.0: 4 ports detected Dec 13 04:53:00.806654 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 04:53:00.806954 kernel: hub 2-0:1.0: USB hub found Dec 13 04:53:00.807206 kernel: hub 2-0:1.0: 4 ports detected Dec 13 04:53:01.035104 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 04:53:01.177051 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 04:53:01.183674 kernel: usbcore: registered new interface driver usbhid Dec 13 04:53:01.183735 kernel: usbhid: USB HID core driver Dec 13 04:53:01.190062 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 04:53:01.194214 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 04:53:01.615157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:53:01.617078 disk-uuid[562]: The operation has completed successfully. Dec 13 04:53:01.671813 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 04:53:01.671980 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 04:53:01.696274 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 04:53:01.714358 sh[581]: Success Dec 13 04:53:01.732065 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 04:53:01.812779 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 04:53:01.814749 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 04:53:01.833319 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Dec 13 04:53:01.857472 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 04:53:01.857608 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:53:01.857633 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 04:53:01.859349 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 04:53:01.861059 kernel: BTRFS info (device dm-0): using free space tree Dec 13 04:53:01.872945 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 04:53:01.874803 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 04:53:01.881379 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 04:53:01.885233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 04:53:01.899084 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 04:53:01.899148 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:53:01.902052 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:53:01.907035 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 04:53:01.919465 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 04:53:01.922973 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 04:53:01.928856 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 04:53:01.936260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 04:53:02.064870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 04:53:02.074244 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 04:53:02.085087 ignition[661]: Ignition 2.19.0 Dec 13 04:53:02.085120 ignition[661]: Stage: fetch-offline Dec 13 04:53:02.085224 ignition[661]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:02.090379 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 04:53:02.085263 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:02.085455 ignition[661]: parsed url from cmdline: "" Dec 13 04:53:02.085463 ignition[661]: no config URL provided Dec 13 04:53:02.085473 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:53:02.085490 ignition[661]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:53:02.085500 ignition[661]: failed to fetch config: resource requires networking Dec 13 04:53:02.085994 ignition[661]: Ignition finished successfully Dec 13 04:53:02.118879 systemd-networkd[767]: lo: Link UP Dec 13 04:53:02.118893 systemd-networkd[767]: lo: Gained carrier Dec 13 04:53:02.121483 systemd-networkd[767]: Enumeration completed Dec 13 04:53:02.122456 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 04:53:02.122459 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 04:53:02.122467 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 04:53:02.124686 systemd-networkd[767]: eth0: Link UP Dec 13 04:53:02.124692 systemd-networkd[767]: eth0: Gained carrier Dec 13 04:53:02.124703 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 04:53:02.125274 systemd[1]: Reached target network.target - Network. Dec 13 04:53:02.133233 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 04:53:02.153794 ignition[770]: Ignition 2.19.0 Dec 13 04:53:02.153812 ignition[770]: Stage: fetch Dec 13 04:53:02.154080 ignition[770]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:02.154101 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:02.154283 ignition[770]: parsed url from cmdline: "" Dec 13 04:53:02.154290 ignition[770]: no config URL provided Dec 13 04:53:02.154300 ignition[770]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:53:02.159140 systemd-networkd[767]: eth0: DHCPv4 address 10.244.15.10/30, gateway 10.244.15.9 acquired from 10.244.15.9 Dec 13 04:53:02.154317 ignition[770]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:53:02.154512 ignition[770]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 04:53:02.154568 ignition[770]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 04:53:02.156302 ignition[770]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 04:53:02.156711 ignition[770]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 04:53:02.357249 ignition[770]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Dec 13 04:53:02.372170 ignition[770]: GET result: OK Dec 13 04:53:02.372325 ignition[770]: parsing config with SHA512: 9795558fef0e78050bb50cefda61f9d32783bb2f10a62474052fb65a0dbf5fa020847c5dcd5eede1b48e4f17a9626589a9430296596443af84bc44ccdc1a344d Dec 13 04:53:02.377134 unknown[770]: fetched base config from "system" Dec 13 04:53:02.377152 unknown[770]: fetched base config from "system" Dec 13 04:53:02.377668 ignition[770]: fetch: fetch complete Dec 13 04:53:02.377165 unknown[770]: fetched user config from "openstack" Dec 13 04:53:02.377677 ignition[770]: fetch: fetch passed Dec 13 04:53:02.380405 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 04:53:02.377743 ignition[770]: Ignition finished successfully Dec 13 04:53:02.405224 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 04:53:02.425479 ignition[777]: Ignition 2.19.0 Dec 13 04:53:02.425502 ignition[777]: Stage: kargs Dec 13 04:53:02.425738 ignition[777]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:02.425766 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:02.426759 ignition[777]: kargs: kargs passed Dec 13 04:53:02.429370 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 04:53:02.426833 ignition[777]: Ignition finished successfully Dec 13 04:53:02.435227 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 04:53:02.465686 ignition[783]: Ignition 2.19.0 Dec 13 04:53:02.466308 ignition[783]: Stage: disks Dec 13 04:53:02.466617 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:02.466639 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:02.467575 ignition[783]: disks: disks passed Dec 13 04:53:02.469086 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 04:53:02.467646 ignition[783]: Ignition finished successfully Dec 13 04:53:02.471344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 04:53:02.472463 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 04:53:02.473930 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 04:53:02.475584 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 04:53:02.477185 systemd[1]: Reached target basic.target - Basic System. Dec 13 04:53:02.485249 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 04:53:02.506924 systemd-fsck[791]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 04:53:02.511064 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 04:53:02.531343 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 04:53:02.653032 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 04:53:02.653513 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 04:53:02.655836 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 04:53:02.671188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 04:53:02.674349 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 04:53:02.676522 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 04:53:02.685258 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 13 04:53:02.697240 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Dec 13 04:53:02.706365 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 04:53:02.706442 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:53:02.706466 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:53:02.695983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 04:53:02.696073 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 04:53:02.699694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 04:53:02.718893 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 04:53:02.710377 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 04:53:02.720826 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 04:53:02.785381 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 04:53:02.793352 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory Dec 13 04:53:02.801852 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 04:53:02.812710 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 04:53:02.920864 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 04:53:02.926259 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 04:53:02.930227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 04:53:02.944686 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 04:53:02.947865 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 04:53:02.971312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 04:53:02.981068 ignition[917]: INFO : Ignition 2.19.0 Dec 13 04:53:02.982368 ignition[917]: INFO : Stage: mount Dec 13 04:53:02.982368 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:02.982368 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:02.984997 ignition[917]: INFO : mount: mount passed Dec 13 04:53:02.984997 ignition[917]: INFO : Ignition finished successfully Dec 13 04:53:02.984482 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 04:53:03.927413 systemd-networkd[767]: eth0: Gained IPv6LL Dec 13 04:53:05.436890 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3c2:24:19ff:fef4:f0a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3c2:24:19ff:fef4:f0a/64 assigned by NDisc. Dec 13 04:53:05.436905 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 04:53:09.861112 coreos-metadata[801]: Dec 13 04:53:09.860 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:53:09.885576 coreos-metadata[801]: Dec 13 04:53:09.885 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 04:53:09.899244 coreos-metadata[801]: Dec 13 04:53:09.899 INFO Fetch successful Dec 13 04:53:09.900187 coreos-metadata[801]: Dec 13 04:53:09.899 INFO wrote hostname srv-6eb17.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 04:53:09.902966 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 04:53:09.903212 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 04:53:09.921160 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 04:53:09.930652 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 04:53:09.949046 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934) Dec 13 04:53:09.957052 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 04:53:09.957097 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:53:09.957136 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:53:09.971046 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 04:53:09.974906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 04:53:10.009142 ignition[952]: INFO : Ignition 2.19.0 Dec 13 04:53:10.009142 ignition[952]: INFO : Stage: files Dec 13 04:53:10.011170 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:10.011170 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:10.011170 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Dec 13 04:53:10.014275 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 04:53:10.014275 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 04:53:10.016784 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 04:53:10.016784 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 04:53:10.019244 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 04:53:10.019058 unknown[952]: wrote ssh authorized keys file for user: core Dec 13 04:53:10.022212 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 04:53:10.022212 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:53:10.024815 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 04:53:10.673517 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 04:53:13.324811 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:53:13.327094 ignition[952]: INFO : files: op(8): [started] processing unit "containerd.service" Dec 13 04:53:13.327094 ignition[952]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 04:53:13.329674 ignition[952]: INFO : files: op(8): op(9): [finished] writing systemd drop-in 
"10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 04:53:13.329674 ignition[952]: INFO : files: op(8): [finished] processing unit "containerd.service" Dec 13 04:53:13.329674 ignition[952]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 04:53:13.329674 ignition[952]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 04:53:13.329674 ignition[952]: INFO : files: files passed Dec 13 04:53:13.329674 ignition[952]: INFO : Ignition finished successfully Dec 13 04:53:13.331149 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 04:53:13.346328 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 04:53:13.351706 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 04:53:13.353166 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 04:53:13.353335 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 04:53:13.385494 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 04:53:13.385494 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 04:53:13.389099 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 04:53:13.390420 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 04:53:13.391948 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 04:53:13.399319 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 04:53:13.441370 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 04:53:13.441550 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 04:53:13.443520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 04:53:13.444997 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 04:53:13.446745 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 04:53:13.451204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 04:53:13.474375 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 04:53:13.479222 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 04:53:13.506244 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 04:53:13.508320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 04:53:13.509377 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 04:53:13.510876 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 04:53:13.511085 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 04:53:13.512987 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 04:53:13.513990 systemd[1]: Stopped target basic.target - Basic System. Dec 13 04:53:13.515604 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 04:53:13.517092 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 04:53:13.518535 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 04:53:13.520144 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 04:53:13.521703 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 04:53:13.523372 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 04:53:13.524907 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 04:53:13.526585 systemd[1]: Stopped target swap.target - Swaps. Dec 13 04:53:13.527991 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 04:53:13.528234 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 04:53:13.530003 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 04:53:13.531082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 04:53:13.532564 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 04:53:13.532946 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 04:53:13.534277 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 04:53:13.534451 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 04:53:13.536527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 04:53:13.536715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 04:53:13.538450 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 04:53:13.538606 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 04:53:13.546367 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 04:53:13.547194 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 04:53:13.547466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 04:53:13.552316 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 04:53:13.553908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 04:53:13.554196 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 04:53:13.556754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 04:53:13.558244 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 04:53:13.567077 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 04:53:13.568119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 04:53:13.588265 ignition[1004]: INFO : Ignition 2.19.0 Dec 13 04:53:13.588265 ignition[1004]: INFO : Stage: umount Dec 13 04:53:13.590206 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 04:53:13.590206 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:53:13.593636 ignition[1004]: INFO : umount: umount passed Dec 13 04:53:13.595241 ignition[1004]: INFO : Ignition finished successfully Dec 13 04:53:13.597330 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 04:53:13.598976 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 04:53:13.599149 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 04:53:13.600360 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 04:53:13.600489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 13 04:53:13.602741 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 04:53:13.602888 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 04:53:13.610758 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 04:53:13.610878 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 04:53:13.612208 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 04:53:13.612306 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 04:53:13.613637 systemd[1]: Stopped target network.target - Network. Dec 13 04:53:13.615038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 04:53:13.615132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 04:53:13.616624 systemd[1]: Stopped target paths.target - Path Units. Dec 13 04:53:13.618051 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 04:53:13.618497 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 04:53:13.619670 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 04:53:13.620368 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 04:53:13.621865 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 04:53:13.621952 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 04:53:13.623374 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 04:53:13.623445 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 04:53:13.624958 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 04:53:13.625076 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 04:53:13.626675 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 04:53:13.626746 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 04:53:13.628233 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 04:53:13.628340 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 04:53:13.630134 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 04:53:13.632743 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 04:53:13.634626 systemd-networkd[767]: eth0: DHCPv6 lease lost Dec 13 04:53:13.638281 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 04:53:13.638490 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 04:53:13.639774 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 04:53:13.639848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 04:53:13.645139 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 04:53:13.646900 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 04:53:13.646988 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 04:53:13.656207 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 04:53:13.657748 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 04:53:13.657931 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 04:53:13.668408 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:53:13.668573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 04:53:13.671180 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 04:53:13.671315 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 04:53:13.672657 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 04:53:13.672729 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 04:53:13.675059 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 04:53:13.675305 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 04:53:13.676808 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 04:53:13.676949 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 04:53:13.679347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 04:53:13.679428 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 04:53:13.680823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 04:53:13.680884 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 04:53:13.682695 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 04:53:13.682769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 04:53:13.684967 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 04:53:13.685058 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 04:53:13.686561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 04:53:13.686630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 04:53:13.694206 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 04:53:13.696124 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 04:53:13.696201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 04:53:13.700239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 04:53:13.700308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 04:53:13.707797 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 04:53:13.707958 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 04:53:13.709943 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 04:53:13.717272 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 04:53:13.728026 systemd[1]: Switching root. Dec 13 04:53:13.769044 systemd-journald[201]: Journal stopped Dec 13 04:53:15.198486 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). 
Dec 13 04:53:15.198605 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 04:53:15.198646 kernel: SELinux: policy capability open_perms=1 Dec 13 04:53:15.198667 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 04:53:15.198693 kernel: SELinux: policy capability always_check_network=0 Dec 13 04:53:15.198734 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 04:53:15.198755 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 04:53:15.198786 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 04:53:15.198819 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 04:53:15.198855 kernel: audit: type=1403 audit(1734065594.047:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 04:53:15.198878 systemd[1]: Successfully loaded SELinux policy in 53.472ms. Dec 13 04:53:15.198911 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.921ms. Dec 13 04:53:15.198941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 04:53:15.198965 systemd[1]: Detected virtualization kvm. Dec 13 04:53:15.198986 systemd[1]: Detected architecture x86-64. Dec 13 04:53:15.199049 systemd[1]: Detected first boot. Dec 13 04:53:15.199093 systemd[1]: Hostname set to . Dec 13 04:53:15.199123 systemd[1]: Initializing machine ID from VM UUID. Dec 13 04:53:15.199150 zram_generator::config[1067]: No configuration found. Dec 13 04:53:15.199174 systemd[1]: Populated /etc with preset unit settings. Dec 13 04:53:15.199208 systemd[1]: Queued start job for default target multi-user.target. Dec 13 04:53:15.199239 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 04:53:15.199269 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 04:53:15.199292 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 04:53:15.199325 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 04:53:15.199348 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 04:53:15.199377 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 04:53:15.199408 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 04:53:15.199432 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 04:53:15.199460 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 04:53:15.199488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 04:53:15.199511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 04:53:15.199539 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 04:53:15.199574 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 04:53:15.199603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 04:53:15.199627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 13 04:53:15.199648 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 04:53:15.199669 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 04:53:15.199697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 04:53:15.199723 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 04:53:15.199752 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 04:53:15.199788 systemd[1]: Reached target slices.target - Slice Units. Dec 13 04:53:15.199817 systemd[1]: Reached target swap.target - Swaps. Dec 13 04:53:15.199841 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 04:53:15.199861 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 04:53:15.199891 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 04:53:15.199914 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 04:53:15.199934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 04:53:15.199955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 04:53:15.199988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 04:53:15.200075 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 04:53:15.200102 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 04:53:15.200133 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 04:53:15.200162 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 04:53:15.200970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:15.201007 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 04:53:15.201081 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 04:53:15.201107 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 04:53:15.201155 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 04:53:15.201179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 04:53:15.201227 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 04:53:15.201280 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 04:53:15.201304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 04:53:15.201340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 04:53:15.201370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 04:53:15.201393 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 04:53:15.201414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 04:53:15.201436 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 04:53:15.201457 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Dec 13 04:53:15.201484 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 04:53:15.201517 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 04:53:15.201539 kernel: fuse: init (API version 7.39) Dec 13 04:53:15.201572 kernel: loop: module loaded Dec 13 04:53:15.201600 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 04:53:15.201623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 04:53:15.201680 systemd-journald[1174]: Collecting audit messages is disabled. Dec 13 04:53:15.201739 systemd-journald[1174]: Journal started Dec 13 04:53:15.201774 systemd-journald[1174]: Runtime Journal (/run/log/journal/bdc4d1320649468b815d10038120052a) is 4.7M, max 38.0M, 33.2M free. Dec 13 04:53:15.208070 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 04:53:15.221500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 04:53:15.247231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:15.254081 kernel: ACPI: bus type drm_connector registered Dec 13 04:53:15.254140 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 04:53:15.254968 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 04:53:15.256384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 04:53:15.257376 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 04:53:15.258303 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 04:53:15.259241 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 04:53:15.260165 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 04:53:15.261319 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 04:53:15.262639 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 04:53:15.264185 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 04:53:15.264469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 04:53:15.265833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:53:15.266181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 04:53:15.267534 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:53:15.267783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 04:53:15.268964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:53:15.269444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 04:53:15.270797 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 04:53:15.271037 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 04:53:15.272546 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:53:15.275340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 04:53:15.277711 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 04:53:15.281211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 13 04:53:15.282836 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 04:53:15.297061 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 04:53:15.304167 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 04:53:15.317348 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 04:53:15.318302 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 04:53:15.333219 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 04:53:15.353280 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 04:53:15.354229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:53:15.359679 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 04:53:15.363570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 04:53:15.372244 systemd-journald[1174]: Time spent on flushing to /var/log/journal/bdc4d1320649468b815d10038120052a is 71.596ms for 1108 entries. Dec 13 04:53:15.372244 systemd-journald[1174]: System Journal (/var/log/journal/bdc4d1320649468b815d10038120052a) is 8.0M, max 584.8M, 576.8M free. Dec 13 04:53:15.467232 systemd-journald[1174]: Received client request to flush runtime journal. Dec 13 04:53:15.376234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 04:53:15.389282 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 04:53:15.394497 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 04:53:15.408540 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 04:53:15.433686 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 04:53:15.434680 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 04:53:15.474666 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 04:53:15.478850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 04:53:15.492105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 04:53:15.505334 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 04:53:15.520588 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Dec 13 04:53:15.520618 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Dec 13 04:53:15.520702 udevadm[1234]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 04:53:15.535404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 04:53:15.544336 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 04:53:15.585939 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 04:53:15.601297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 04:53:15.626049 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. 
Dec 13 04:53:15.626080 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Dec 13 04:53:15.635870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 04:53:16.170894 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 04:53:16.182307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 04:53:16.226283 systemd-udevd[1247]: Using default interface naming scheme 'v255'. Dec 13 04:53:16.254389 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 04:53:16.268243 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 04:53:16.301414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 04:53:16.369940 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 04:53:16.371749 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 04:53:16.381035 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1259) Dec 13 04:53:16.385189 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1259) Dec 13 04:53:16.431059 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1252) Dec 13 04:53:16.499749 systemd-networkd[1253]: lo: Link UP Dec 13 04:53:16.499762 systemd-networkd[1253]: lo: Gained carrier Dec 13 04:53:16.508126 systemd-networkd[1253]: Enumeration completed Dec 13 04:53:16.511677 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 04:53:16.511700 systemd-networkd[1253]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:53:16.513366 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 04:53:16.517502 systemd-networkd[1253]: eth0: Link UP Dec 13 04:53:16.517515 systemd-networkd[1253]: eth0: Gained carrier Dec 13 04:53:16.517534 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 04:53:16.533227 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 04:53:16.540107 systemd-networkd[1253]: eth0: DHCPv4 address 10.244.15.10/30, gateway 10.244.15.9 acquired from 10.244.15.9 Dec 13 04:53:16.544332 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 04:53:16.576323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 04:53:16.583049 kernel: ACPI: button: Power Button [PWRF] Dec 13 04:53:16.594039 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 04:53:16.613714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 04:53:16.651050 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 04:53:16.651133 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 04:53:16.658940 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 04:53:16.659253 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 04:53:16.707362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 04:53:16.889830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 04:53:16.903852 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 04:53:16.914323 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 04:53:16.930306 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:53:16.963589 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 04:53:16.965695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 04:53:16.974320 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 04:53:16.980359 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:53:17.014455 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 04:53:17.016184 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 04:53:17.017124 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 04:53:17.017325 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 04:53:17.018330 systemd[1]: Reached target machines.target - Containers. Dec 13 04:53:17.020921 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 04:53:17.027187 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 04:53:17.030285 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 04:53:17.033144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 04:53:17.036352 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 04:53:17.043238 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 04:53:17.051235 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 04:53:17.056251 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 04:53:17.070473 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 04:53:17.083777 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 04:53:17.086771 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Dec 13 04:53:17.096045 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 04:53:17.130319 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 04:53:17.151057 kernel: loop1: detected capacity change from 0 to 8 Dec 13 04:53:17.173230 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 04:53:17.219049 kernel: loop3: detected capacity change from 0 to 211296 Dec 13 04:53:17.265088 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 04:53:17.293480 kernel: loop5: detected capacity change from 0 to 8 Dec 13 04:53:17.296043 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 04:53:17.328065 kernel: loop7: detected capacity change from 0 to 211296 Dec 13 04:53:17.350472 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 04:53:17.352930 (sd-merge)[1312]: Merged extensions into '/usr'. Dec 13 04:53:17.358889 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 04:53:17.359083 systemd[1]: Reloading... Dec 13 04:53:17.464069 zram_generator::config[1343]: No configuration found. Dec 13 04:53:17.695106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:53:17.721048 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 04:53:17.781040 systemd[1]: Reloading finished in 421 ms. Dec 13 04:53:17.800054 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 04:53:17.801491 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 04:53:17.815479 systemd[1]: Starting ensure-sysext.service... Dec 13 04:53:17.820213 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 04:53:17.828046 systemd[1]: Reloading requested from client PID 1404 ('systemctl') (unit ensure-sysext.service)... Dec 13 04:53:17.828080 systemd[1]: Reloading... Dec 13 04:53:17.871458 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 04:53:17.872183 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 04:53:17.874534 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 04:53:17.875190 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Dec 13 04:53:17.875428 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Dec 13 04:53:17.880770 systemd-networkd[1253]: eth0: Gained IPv6LL Dec 13 04:53:17.884284 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 04:53:17.884449 systemd-tmpfiles[1405]: Skipping /boot Dec 13 04:53:17.902895 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 04:53:17.903119 systemd-tmpfiles[1405]: Skipping /boot Dec 13 04:53:17.928042 zram_generator::config[1434]: No configuration found. Dec 13 04:53:18.137706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:53:18.222672 systemd[1]: Reloading finished in 394 ms. 
Dec 13 04:53:18.241682 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 04:53:18.256985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 04:53:18.264593 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 04:53:18.275340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 04:53:18.279218 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 04:53:18.293890 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 04:53:18.301313 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 04:53:18.316445 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:18.316747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 04:53:18.327271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 04:53:18.339405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 04:53:18.359704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 04:53:18.360737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 04:53:18.360926 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:18.364645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 04:53:18.372371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:53:18.372651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 04:53:18.377416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:53:18.378189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 04:53:18.380946 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:53:18.381420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 04:53:18.388905 augenrules[1528]: No rules Dec 13 04:53:18.390721 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 04:53:18.404067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:18.404450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 04:53:18.418454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 04:53:18.424100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 04:53:18.434437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 04:53:18.437338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 04:53:18.448340 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 04:53:18.449201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
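The modprobe@dm_mod, modprobe@efi_pstore and modprobe@loop units above are instances of systemd's modprobe@.service template, which essentially runs modprobe on the instance name. A quick sketch of the equivalent by hand (module names taken from the log):

    # Load the same modules the template instances load
    modprobe dm_mod
    modprobe efi_pstore
    modprobe loop
    # Or go through the template unit itself
    systemctl start modprobe@dm_mod.service
    # Confirm the modules are present (some may be built into the kernel)
    lsmod | grep -E 'dm_mod|efi_pstore|loop'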
Dec 13 04:53:18.458773 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 04:53:18.461290 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 04:53:18.463980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:53:18.464302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 04:53:18.465801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:53:18.466385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 04:53:18.468247 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:53:18.470241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 04:53:18.481782 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 04:53:18.486974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:18.488075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 04:53:18.495438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 04:53:18.500398 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 04:53:18.513507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 04:53:18.520399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 04:53:18.521344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 04:53:18.521527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:53:18.521666 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:53:18.531707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:53:18.531989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 04:53:18.533604 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:53:18.533839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 04:53:18.536231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:53:18.536461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 04:53:18.545624 systemd[1]: Finished ensure-sysext.service. Dec 13 04:53:18.550529 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:53:18.550890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 04:53:18.555437 systemd-resolved[1504]: Positive Trust Anchors: Dec 13 04:53:18.557069 systemd-resolved[1504]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:53:18.557239 systemd-resolved[1504]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 04:53:18.559675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:53:18.559800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 04:53:18.564661 systemd-resolved[1504]: Using system hostname 'srv-6eb17.gb1.brightbox.com'. Dec 13 04:53:18.569249 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 04:53:18.572307 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 04:53:18.573257 systemd[1]: Reached target network.target - Network. Dec 13 04:53:18.573956 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 04:53:18.574710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 04:53:18.654336 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 04:53:18.655468 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 04:53:18.656346 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 04:53:18.657259 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 04:53:18.658158 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 04:53:18.658975 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 04:53:18.659049 systemd[1]: Reached target paths.target - Path Units. Dec 13 04:53:18.659838 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 04:53:18.660806 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 04:53:18.661765 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 04:53:18.662624 systemd[1]: Reached target timers.target - Timer Units. Dec 13 04:53:18.664702 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 04:53:18.667613 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 04:53:18.671074 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 04:53:18.672430 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 04:53:18.673272 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 04:53:18.674033 systemd[1]: Reached target basic.target - Basic System. Dec 13 04:53:18.675001 systemd[1]: System is tainted: cgroupsv1 Dec 13 04:53:18.675083 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
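With systemd-resolved and systemd-timesyncd both running at this point, their runtime state can be inspected directly; a brief sketch using standard systemd tooling (commands are not taken from this log):

    # Per-link DNS servers, search domains and DNSSEC state
    resolvectl status
    # Which NTP server timesyncd is talking to and the current offset
    timedatectl timesync-status
    # Overall clock and NTP synchronization summary
    timedatectl status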
Dec 13 04:53:18.675123 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 04:53:18.679179 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 04:53:18.683143 systemd-networkd[1253]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3c2:24:19ff:fef4:f0a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3c2:24:19ff:fef4:f0a/64 assigned by NDisc. Dec 13 04:53:18.683284 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 04:53:18.683402 systemd-networkd[1253]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 04:53:18.692275 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 04:53:18.700155 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 04:53:18.715431 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 04:53:18.720175 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 04:53:18.734699 dbus-daemon[1577]: [system] SELinux support is enabled Dec 13 04:53:18.740309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 04:53:18.742838 dbus-daemon[1577]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1253 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 04:53:18.747252 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 04:53:18.755697 jq[1579]: false Dec 13 04:53:18.756951 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 04:53:18.775264 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 04:53:18.780587 extend-filesystems[1580]: Found loop4 Dec 13 04:53:18.784943 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 04:53:18.790151 extend-filesystems[1580]: Found loop5 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found loop6 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found loop7 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found vda Dec 13 04:53:18.790151 extend-filesystems[1580]: Found vda1 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found vda2 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found vda3 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found usr Dec 13 04:53:18.790151 extend-filesystems[1580]: Found vda4 Dec 13 04:53:18.790151 extend-filesystems[1580]: Found vda6 Dec 13 04:53:18.811251 extend-filesystems[1580]: Found vda7 Dec 13 04:53:18.811251 extend-filesystems[1580]: Found vda9 Dec 13 04:53:18.811251 extend-filesystems[1580]: Checking size of /dev/vda9 Dec 13 04:53:18.802631 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 04:53:18.805933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 04:53:18.820689 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 04:53:18.842169 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 04:53:18.845733 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
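The systemd-networkd hint above names two .network settings for resolving the DHCPv6/NDisc address conflict. A hypothetical drop-in illustrating both options it mentions; the file name is a placeholder for whatever .network file actually matches eth0 on this host, and only one of the two settings would normally be chosen:

    # Find which .network file currently configures eth0
    networkctl status eth0
    # Illustrative drop-in (replace zz-default.network with the real file name)
    mkdir -p /etc/systemd/network/zz-default.network.d
    cat <<'EOF' > /etc/systemd/network/zz-default.network.d/ipv6.conf
    [Network]
    # Pin the interface identifier used for SLAAC addresses
    IPv6Token=::24:19ff:fef4:f0a

    [IPv6AcceptRA]
    # Or suppress SLAAC addresses derived from the advertised prefix
    UseAutonomousPrefix=no
    EOF
    networkctl reload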
Dec 13 04:53:18.851174 extend-filesystems[1580]: Resized partition /dev/vda9 Dec 13 04:53:18.859546 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 04:53:18.859961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 04:53:18.864104 extend-filesystems[1614]: resize2fs 1.47.1 (20-May-2024) Dec 13 04:53:18.882113 jq[1606]: true Dec 13 04:53:18.887455 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 04:53:18.869670 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 04:53:18.887620 update_engine[1602]: I20241213 04:53:18.881117 1602 main.cc:92] Flatcar Update Engine starting Dec 13 04:53:18.870054 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 04:53:18.888902 update_engine[1602]: I20241213 04:53:18.888835 1602 update_check_scheduler.cc:74] Next update check in 5m50s Dec 13 04:53:18.891510 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 04:53:18.892003 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 04:53:18.899086 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 04:53:18.913502 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1248) Dec 13 04:53:18.924206 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 04:53:18.936812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 04:53:18.936865 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 04:53:18.937821 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 04:53:18.937853 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 04:53:18.940832 systemd[1]: Started update-engine.service - Update Engine. Dec 13 04:53:18.943289 jq[1619]: true Dec 13 04:53:18.940536 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 04:53:18.971909 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 04:53:18.973541 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 04:53:18.991608 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 04:53:19.111411 systemd-timesyncd[1571]: Contacted time server 131.111.8.61:123 (0.flatcar.pool.ntp.org). Dec 13 04:53:19.111510 systemd-timesyncd[1571]: Initial clock synchronization to Fri 2024-12-13 04:53:19.368420 UTC. Dec 13 04:53:19.124538 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 04:53:19.124593 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 04:53:19.126204 systemd-logind[1596]: New seat seat0. Dec 13 04:53:19.141211 systemd[1]: Started systemd-logind.service - User Login Management. 
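The extend-filesystems service above is growing the root filesystem on /dev/vda9 in place; ext4 supports resizing while mounted. A minimal sketch of the same operation done manually (device name taken from the log; growpart is only needed if the partition itself has not already been enlarged):

    # Grow partition 9 on /dev/vda to fill the disk, if required
    # growpart /dev/vda 9
    # Online-resize the mounted ext4 filesystem to the new partition size
    resize2fs /dev/vda9
    # Verify the new size
    df -h /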
Dec 13 04:53:19.239540 bash[1649]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:53:19.242201 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 04:53:19.260038 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 04:53:19.261421 systemd[1]: Starting sshkeys.service... Dec 13 04:53:19.280296 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 04:53:19.280296 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 04:53:19.280296 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 04:53:19.300474 extend-filesystems[1580]: Resized filesystem in /dev/vda9 Dec 13 04:53:19.284503 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 04:53:19.284871 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 04:53:19.313415 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 04:53:19.324167 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 04:53:19.397565 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 04:53:19.417454 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 04:53:19.417738 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 04:53:19.418640 dbus-daemon[1577]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1633 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 04:53:19.428439 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 04:53:19.478827 polkitd[1668]: Started polkitd version 121 Dec 13 04:53:19.505339 polkitd[1668]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 04:53:19.505454 polkitd[1668]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 04:53:19.508623 polkitd[1668]: Finished loading, compiling and executing 2 rules Dec 13 04:53:19.516244 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 04:53:19.516519 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 04:53:19.517561 polkitd[1668]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 04:53:19.551147 systemd-hostnamed[1633]: Hostname set to (static) Dec 13 04:53:19.576993 containerd[1621]: time="2024-12-13T04:53:19.576847147Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 04:53:19.646798 containerd[1621]: time="2024-12-13T04:53:19.646727925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652421464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652471693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652498262Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652766266Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652798070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652906243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:53:19.654037 containerd[1621]: time="2024-12-13T04:53:19.652929283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.654340 containerd[1621]: time="2024-12-13T04:53:19.654151801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:53:19.654340 containerd[1621]: time="2024-12-13T04:53:19.654179039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.655211 containerd[1621]: time="2024-12-13T04:53:19.654201999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:53:19.655211 containerd[1621]: time="2024-12-13T04:53:19.655211576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.655392 containerd[1621]: time="2024-12-13T04:53:19.655365120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.655750 containerd[1621]: time="2024-12-13T04:53:19.655722616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:53:19.655946 containerd[1621]: time="2024-12-13T04:53:19.655917441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:53:19.656001 containerd[1621]: time="2024-12-13T04:53:19.655949145Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 04:53:19.657083 containerd[1621]: time="2024-12-13T04:53:19.657055015Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 04:53:19.657201 containerd[1621]: time="2024-12-13T04:53:19.657175720Z" level=info msg="metadata content store policy set" policy=shared Dec 13 04:53:19.663833 containerd[1621]: time="2024-12-13T04:53:19.663795716Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 04:53:19.663923 containerd[1621]: time="2024-12-13T04:53:19.663875391Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Dec 13 04:53:19.663923 containerd[1621]: time="2024-12-13T04:53:19.663906149Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 04:53:19.663923 containerd[1621]: time="2024-12-13T04:53:19.663931427Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 04:53:19.664180 containerd[1621]: time="2024-12-13T04:53:19.663953180Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 04:53:19.664180 containerd[1621]: time="2024-12-13T04:53:19.664160129Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664603925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664781053Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664807663Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664841210Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664866347Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664886509Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664905166Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664925547Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664946217Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664965328Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.664983427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665035 containerd[1621]: time="2024-12-13T04:53:19.665008667Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665074100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665099117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665129185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665151170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665169683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665231408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665266910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665289196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665309980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665333674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665357092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665377051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665396795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665419768Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 04:53:19.665482 containerd[1621]: time="2024-12-13T04:53:19.665462768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665485284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665502926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665571046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665598227Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665615526Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665633808Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665649930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665675853Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665705011Z" level=info msg="NRI interface is disabled by configuration." Dec 13 04:53:19.666053 containerd[1621]: time="2024-12-13T04:53:19.665723570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 04:53:19.666438 containerd[1621]: time="2024-12-13T04:53:19.666127285Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 04:53:19.666438 containerd[1621]: time="2024-12-13T04:53:19.666210538Z" level=info msg="Connect containerd service" Dec 13 04:53:19.666438 containerd[1621]: time="2024-12-13T04:53:19.666271822Z" level=info msg="using legacy CRI server" Dec 13 04:53:19.666438 containerd[1621]: time="2024-12-13T04:53:19.666288185Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 04:53:19.666438 containerd[1621]: 
time="2024-12-13T04:53:19.666431126Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.667286586Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.667861315Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.667976497Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.668238178Z" level=info msg="Start subscribing containerd event" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.668310417Z" level=info msg="Start recovering state" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.668443419Z" level=info msg="Start event monitor" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.668489526Z" level=info msg="Start snapshots syncer" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.668509875Z" level=info msg="Start cni network conf syncer for default" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.668521970Z" level=info msg="Start streaming server" Dec 13 04:53:19.671124 containerd[1621]: time="2024-12-13T04:53:19.670232688Z" level=info msg="containerd successfully booted in 0.098475s" Dec 13 04:53:19.668778 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 04:53:20.182867 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 04:53:20.224123 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 04:53:20.239056 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 04:53:20.250508 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 04:53:20.250951 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 04:53:20.259465 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 04:53:20.263258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 04:53:20.271891 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 04:53:20.285257 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 04:53:20.299321 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 04:53:20.310729 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 04:53:20.312289 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 04:53:21.088510 kubelet[1704]: E1213 04:53:21.088360 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:53:21.091202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:53:21.091930 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 04:53:23.683466 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 04:53:23.696604 systemd[1]: Started sshd@0-10.244.15.10:22-147.75.109.163:52738.service - OpenSSH per-connection server daemon (147.75.109.163:52738). Dec 13 04:53:24.608412 sshd[1722]: Accepted publickey for core from 147.75.109.163 port 52738 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:24.611526 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:24.629121 systemd-logind[1596]: New session 1 of user core. Dec 13 04:53:24.630384 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 04:53:24.641526 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 04:53:24.676482 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 04:53:24.689871 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 04:53:24.699158 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:53:24.844779 systemd[1728]: Queued start job for default target default.target. Dec 13 04:53:24.845841 systemd[1728]: Created slice app.slice - User Application Slice. Dec 13 04:53:24.845881 systemd[1728]: Reached target paths.target - Paths. Dec 13 04:53:24.845904 systemd[1728]: Reached target timers.target - Timers. Dec 13 04:53:24.860238 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 04:53:24.870053 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 04:53:24.870146 systemd[1728]: Reached target sockets.target - Sockets. Dec 13 04:53:24.870171 systemd[1728]: Reached target basic.target - Basic System. Dec 13 04:53:24.870236 systemd[1728]: Reached target default.target - Main User Target. Dec 13 04:53:24.870290 systemd[1728]: Startup finished in 161ms. Dec 13 04:53:24.870451 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 04:53:24.885285 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 04:53:25.355301 login[1709]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 04:53:25.363271 login[1708]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 04:53:25.365964 systemd-logind[1596]: New session 2 of user core. Dec 13 04:53:25.374742 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 04:53:25.381257 systemd-logind[1596]: New session 3 of user core. Dec 13 04:53:25.382106 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 04:53:25.534568 systemd[1]: Started sshd@1-10.244.15.10:22-147.75.109.163:52748.service - OpenSSH per-connection server daemon (147.75.109.163:52748). Dec 13 04:53:25.790754 coreos-metadata[1576]: Dec 13 04:53:25.790 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:53:25.818283 coreos-metadata[1576]: Dec 13 04:53:25.818 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 04:53:26.436924 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 52748 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:26.438434 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:26.445141 systemd-logind[1596]: New session 4 of user core. 
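The sshd@0-10.244.15.10:22-147.75.109.163:52738 unit name indicates per-connection socket activation: sshd.socket (seen listening earlier in the log) is declared with Accept=yes, so systemd spawns one templated sshd instance per incoming TCP connection, grouped under system-sshd.slice. The wiring can be inspected on a running host; a brief sketch:

    # Show the socket and template units behind the per-connection instances
    systemctl cat sshd.socket 'sshd@.service'
    # List the instances spawned for currently open connections
    systemctl list-units 'sshd@*' --no-legend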
Dec 13 04:53:26.450341 coreos-metadata[1660]: Dec 13 04:53:26.450 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:53:26.453544 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 04:53:26.474689 coreos-metadata[1660]: Dec 13 04:53:26.474 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 04:53:26.501187 coreos-metadata[1660]: Dec 13 04:53:26.501 INFO Fetch successful Dec 13 04:53:26.501404 coreos-metadata[1660]: Dec 13 04:53:26.501 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 04:53:26.538220 coreos-metadata[1660]: Dec 13 04:53:26.538 INFO Fetch successful Dec 13 04:53:26.553338 unknown[1660]: wrote ssh authorized keys file for user: core Dec 13 04:53:26.578665 update-ssh-keys[1778]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:53:26.581297 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 04:53:26.586519 systemd[1]: Finished sshkeys.service. Dec 13 04:53:26.845276 coreos-metadata[1576]: Dec 13 04:53:26.844 INFO Fetch failed with 404: resource not found Dec 13 04:53:26.845276 coreos-metadata[1576]: Dec 13 04:53:26.844 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 04:53:26.845905 coreos-metadata[1576]: Dec 13 04:53:26.845 INFO Fetch successful Dec 13 04:53:26.845905 coreos-metadata[1576]: Dec 13 04:53:26.845 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 04:53:26.858102 coreos-metadata[1576]: Dec 13 04:53:26.858 INFO Fetch successful Dec 13 04:53:26.858277 coreos-metadata[1576]: Dec 13 04:53:26.858 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 04:53:26.876730 coreos-metadata[1576]: Dec 13 04:53:26.876 INFO Fetch successful Dec 13 04:53:26.876850 coreos-metadata[1576]: Dec 13 04:53:26.876 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 04:53:26.891160 coreos-metadata[1576]: Dec 13 04:53:26.891 INFO Fetch successful Dec 13 04:53:26.891279 coreos-metadata[1576]: Dec 13 04:53:26.891 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 04:53:26.908299 coreos-metadata[1576]: Dec 13 04:53:26.908 INFO Fetch successful Dec 13 04:53:26.941936 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 04:53:26.943589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 04:53:26.944218 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 04:53:26.944998 systemd[1]: Startup finished in 16.819s (kernel) + 12.949s (userspace) = 29.768s. Dec 13 04:53:27.067391 sshd[1768]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:27.071688 systemd[1]: sshd@1-10.244.15.10:22-147.75.109.163:52748.service: Deactivated successfully. Dec 13 04:53:27.072283 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. Dec 13 04:53:27.076376 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 04:53:27.078267 systemd-logind[1596]: Removed session 4. Dec 13 04:53:27.225500 systemd[1]: Started sshd@2-10.244.15.10:22-147.75.109.163:40048.service - OpenSSH per-connection server daemon (147.75.109.163:40048). 
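coreos-metadata above falls back from the config drive to the metadata service API and fetches the endpoints listed in the log; the same data can be retrieved by hand (URLs taken from the log itself):

    # EC2-style endpoints served by the OpenStack metadata service
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/instance-id
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
    # OpenStack-native JSON document
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json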
Dec 13 04:53:28.120229 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 40048 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:28.122242 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:28.129122 systemd-logind[1596]: New session 5 of user core. Dec 13 04:53:28.134455 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 04:53:28.748393 sshd[1796]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:28.752964 systemd[1]: sshd@2-10.244.15.10:22-147.75.109.163:40048.service: Deactivated successfully. Dec 13 04:53:28.756434 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Dec 13 04:53:28.756608 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 04:53:28.759258 systemd-logind[1596]: Removed session 5. Dec 13 04:53:31.341867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 04:53:31.360271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 04:53:31.510226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 04:53:31.518558 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 04:53:31.636924 kubelet[1816]: E1213 04:53:31.636714 1816 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:53:31.643288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:53:31.643626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:53:38.963365 systemd[1]: Started sshd@3-10.244.15.10:22-147.75.109.163:43688.service - OpenSSH per-connection server daemon (147.75.109.163:43688). Dec 13 04:53:39.858861 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 43688 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:39.860828 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:39.868273 systemd-logind[1596]: New session 6 of user core. Dec 13 04:53:39.874723 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 04:53:40.480543 sshd[1825]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:40.485239 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Dec 13 04:53:40.485792 systemd[1]: sshd@3-10.244.15.10:22-147.75.109.163:43688.service: Deactivated successfully. Dec 13 04:53:40.489591 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 04:53:40.491118 systemd-logind[1596]: Removed session 6. Dec 13 04:53:40.630390 systemd[1]: Started sshd@4-10.244.15.10:22-147.75.109.163:43692.service - OpenSSH per-connection server daemon (147.75.109.163:43692). Dec 13 04:53:41.529409 sshd[1833]: Accepted publickey for core from 147.75.109.163 port 43692 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:41.531669 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:41.539720 systemd-logind[1596]: New session 7 of user core. Dec 13 04:53:41.550619 systemd[1]: Started session-7.scope - Session 7 of User core. 
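"Scheduled restart job, restart counter is at 1" shows the unit's Restart= policy re-launching the kubelet after each failed attempt. The counter and the recurring error can be inspected with standard systemd tooling; a short sketch:

    # How many times systemd has restarted the unit, and its restart policy
    systemctl show kubelet.service -p NRestarts -p Restart -p RestartUSec
    # The error from the most recent attempts in this boot
    journalctl -u kubelet.service -b --no-pager | tail -n 20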
Dec 13 04:53:41.696940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 04:53:41.705354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 04:53:41.868294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 04:53:41.869235 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 04:53:41.973544 kubelet[1849]: E1213 04:53:41.973449 1849 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:53:41.976225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:53:41.976687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:53:42.145334 sshd[1833]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:42.150631 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. Dec 13 04:53:42.151811 systemd[1]: sshd@4-10.244.15.10:22-147.75.109.163:43692.service: Deactivated successfully. Dec 13 04:53:42.154552 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 04:53:42.156261 systemd-logind[1596]: Removed session 7. Dec 13 04:53:42.306231 systemd[1]: Started sshd@5-10.244.15.10:22-147.75.109.163:43696.service - OpenSSH per-connection server daemon (147.75.109.163:43696). Dec 13 04:53:43.196052 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 43696 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:43.198192 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:43.204658 systemd-logind[1596]: New session 8 of user core. Dec 13 04:53:43.211523 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 04:53:43.817257 sshd[1862]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:43.822898 systemd[1]: sshd@5-10.244.15.10:22-147.75.109.163:43696.service: Deactivated successfully. Dec 13 04:53:43.826828 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Dec 13 04:53:43.827701 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 04:53:43.829296 systemd-logind[1596]: Removed session 8. Dec 13 04:53:43.966424 systemd[1]: Started sshd@6-10.244.15.10:22-147.75.109.163:43706.service - OpenSSH per-connection server daemon (147.75.109.163:43706). Dec 13 04:53:44.848199 sshd[1870]: Accepted publickey for core from 147.75.109.163 port 43706 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:44.850148 sshd[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:44.857085 systemd-logind[1596]: New session 9 of user core. Dec 13 04:53:44.866465 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 04:53:45.367518 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 04:53:45.367993 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 04:53:45.383368 sudo[1874]: pam_unix(sudo:session): session closed for user root Dec 13 04:53:45.527561 sshd[1870]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:45.532133 systemd[1]: sshd@6-10.244.15.10:22-147.75.109.163:43706.service: Deactivated successfully. Dec 13 04:53:45.535594 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Dec 13 04:53:45.536733 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 04:53:45.539167 systemd-logind[1596]: Removed session 9. Dec 13 04:53:45.677402 systemd[1]: Started sshd@7-10.244.15.10:22-147.75.109.163:43720.service - OpenSSH per-connection server daemon (147.75.109.163:43720). Dec 13 04:53:46.567840 sshd[1879]: Accepted publickey for core from 147.75.109.163 port 43720 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:46.569902 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:46.576949 systemd-logind[1596]: New session 10 of user core. Dec 13 04:53:46.586454 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 04:53:47.043341 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 04:53:47.043820 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 04:53:47.049270 sudo[1884]: pam_unix(sudo:session): session closed for user root Dec 13 04:53:47.057574 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 04:53:47.058101 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 04:53:47.085413 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 04:53:47.087756 auditctl[1887]: No rules Dec 13 04:53:47.088608 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 04:53:47.088936 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 04:53:47.093845 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 04:53:47.144942 augenrules[1906]: No rules Dec 13 04:53:47.146938 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 04:53:47.148377 sudo[1883]: pam_unix(sudo:session): session closed for user root Dec 13 04:53:47.293832 sshd[1879]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:47.298474 systemd[1]: sshd@7-10.244.15.10:22-147.75.109.163:43720.service: Deactivated successfully. Dec 13 04:53:47.300082 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Dec 13 04:53:47.303686 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 04:53:47.305684 systemd-logind[1596]: Removed session 10. Dec 13 04:53:47.448675 systemd[1]: Started sshd@8-10.244.15.10:22-147.75.109.163:53958.service - OpenSSH per-connection server daemon (147.75.109.163:53958). Dec 13 04:53:48.335087 sshd[1915]: Accepted publickey for core from 147.75.109.163 port 53958 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:48.336984 sshd[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:48.344641 systemd-logind[1596]: New session 11 of user core. 
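The audit-rules restart above ends with auditctl and augenrules both reporting "No rules", since the files under /etc/audit/rules.d were just removed. A short sketch of how rules are normally reinstated through that directory (the rule and file name below are illustrative only):

    # Current kernel audit rules (empty at this point in the log)
    auditctl -l
    # Hypothetical rule file; augenrules concatenates rules.d into audit.rules
    cat <<'EOF' > /etc/audit/rules.d/10-example.rules
    -w /etc/passwd -p wa -k passwd-changes
    EOF
    # Rebuild and load the combined rule set
    augenrules --load
    auditctl -l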
Dec 13 04:53:48.351467 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 04:53:48.813753 sudo[1919]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 04:53:48.814527 sudo[1919]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 04:53:49.583218 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 04:53:49.664684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 04:53:49.674708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 04:53:49.710360 systemd[1]: Reloading requested from client PID 1962 ('systemctl') (unit session-11.scope)... Dec 13 04:53:49.710412 systemd[1]: Reloading... Dec 13 04:53:49.858598 zram_generator::config[2002]: No configuration found. Dec 13 04:53:50.048194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:53:50.148182 systemd[1]: Reloading finished in 437 ms. Dec 13 04:53:50.232648 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 04:53:50.235221 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 04:53:50.235767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 04:53:50.242629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 04:53:50.402292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 04:53:50.403366 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 04:53:50.481241 kubelet[2082]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:53:50.481241 kubelet[2082]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 04:53:50.481241 kubelet[2082]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:53:50.482643 kubelet[2082]: I1213 04:53:50.482565 2082 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 04:53:51.085847 kubelet[2082]: I1213 04:53:51.085752 2082 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 04:53:51.085847 kubelet[2082]: I1213 04:53:51.085824 2082 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 04:53:51.089037 kubelet[2082]: I1213 04:53:51.087859 2082 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 04:53:51.111431 kubelet[2082]: I1213 04:53:51.110932 2082 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 04:53:51.127956 kubelet[2082]: I1213 04:53:51.127867 2082 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 04:53:51.128694 kubelet[2082]: I1213 04:53:51.128667 2082 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 04:53:51.128982 kubelet[2082]: I1213 04:53:51.128926 2082 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.128992 2082 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.129034 2082 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.129242 2082 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.129424 2082 kubelet.go:396] "Attempting to sync node with API server" Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.129463 2082 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.129531 2082 kubelet.go:312] "Adding apiserver pod source" Dec 13 04:53:51.129837 kubelet[2082]: I1213 04:53:51.129581 2082 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 04:53:51.132162 kubelet[2082]: E1213 04:53:51.132137 2082 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:51.132595 kubelet[2082]: E1213 04:53:51.132450 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:51.133858 kubelet[2082]: I1213 04:53:51.133529 2082 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 04:53:51.138947 kubelet[2082]: I1213 04:53:51.137092 2082 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 04:53:51.138947 kubelet[2082]: W1213 04:53:51.137252 2082 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 04:53:51.138947 kubelet[2082]: I1213 04:53:51.138241 2082 server.go:1256] "Started kubelet" Dec 13 04:53:51.140074 kubelet[2082]: I1213 04:53:51.140002 2082 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 04:53:51.146765 kubelet[2082]: W1213 04:53:51.146727 2082 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.244.15.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 04:53:51.147035 kubelet[2082]: E1213 04:53:51.146999 2082 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.244.15.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 04:53:51.147276 kubelet[2082]: W1213 04:53:51.147252 2082 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 04:53:51.147398 kubelet[2082]: E1213 04:53:51.147376 2082 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 04:53:51.150147 kubelet[2082]: I1213 04:53:51.150119 2082 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 04:53:51.153181 kubelet[2082]: I1213 04:53:51.153155 2082 server.go:461] "Adding debug handlers to kubelet server" Dec 13 04:53:51.154913 kubelet[2082]: I1213 04:53:51.154876 2082 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 04:53:51.155400 kubelet[2082]: I1213 04:53:51.155381 2082 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 04:53:51.157456 kubelet[2082]: E1213 04:53:51.157428 2082 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.15.10.1810a3810934d000 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.15.10,UID:10.244.15.10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.244.15.10,},FirstTimestamp:2024-12-13 04:53:51.138205696 +0000 UTC m=+0.726831438,LastTimestamp:2024-12-13 04:53:51.138205696 +0000 UTC m=+0.726831438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.15.10,}" Dec 13 04:53:51.159225 kubelet[2082]: I1213 04:53:51.159203 2082 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 04:53:51.160866 kubelet[2082]: I1213 04:53:51.160844 2082 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 04:53:51.161743 kubelet[2082]: I1213 04:53:51.161529 2082 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 04:53:51.167738 kubelet[2082]: E1213 04:53:51.167707 2082 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 04:53:51.171484 kubelet[2082]: I1213 04:53:51.171447 2082 factory.go:221] Registration of the containerd container factory successfully Dec 13 04:53:51.171484 kubelet[2082]: I1213 04:53:51.171480 2082 factory.go:221] Registration of the systemd container factory successfully Dec 13 04:53:51.174477 kubelet[2082]: I1213 04:53:51.174437 2082 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 04:53:51.193118 kubelet[2082]: E1213 04:53:51.192575 2082 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.244.15.10\" not found" node="10.244.15.10" Dec 13 04:53:51.226505 kubelet[2082]: I1213 04:53:51.226469 2082 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 04:53:51.226505 kubelet[2082]: I1213 04:53:51.226501 2082 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 04:53:51.226751 kubelet[2082]: I1213 04:53:51.226537 2082 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:53:51.232685 kubelet[2082]: I1213 04:53:51.232650 2082 policy_none.go:49] "None policy: Start" Dec 13 04:53:51.234326 kubelet[2082]: I1213 04:53:51.234283 2082 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 04:53:51.234778 kubelet[2082]: I1213 04:53:51.234752 2082 state_mem.go:35] "Initializing new in-memory state store" Dec 13 04:53:51.247503 kubelet[2082]: I1213 04:53:51.245644 2082 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:53:51.247503 kubelet[2082]: I1213 04:53:51.246255 2082 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:53:51.252430 kubelet[2082]: E1213 04:53:51.252365 2082 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.15.10\" not found" Dec 13 04:53:51.262296 kubelet[2082]: I1213 04:53:51.261382 2082 kubelet_node_status.go:73] "Attempting to register node" node="10.244.15.10" Dec 13 04:53:51.267891 kubelet[2082]: I1213 04:53:51.267826 2082 kubelet_node_status.go:76] "Successfully registered node" node="10.244.15.10" Dec 13 04:53:51.280569 kubelet[2082]: I1213 04:53:51.280518 2082 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 04:53:51.280995 kubelet[2082]: E1213 04:53:51.280933 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.282086 kubelet[2082]: I1213 04:53:51.282057 2082 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 04:53:51.282163 kubelet[2082]: I1213 04:53:51.282117 2082 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 04:53:51.282163 kubelet[2082]: I1213 04:53:51.282155 2082 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 04:53:51.282376 kubelet[2082]: E1213 04:53:51.282290 2082 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 04:53:51.381602 kubelet[2082]: E1213 04:53:51.381400 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.482179 kubelet[2082]: E1213 04:53:51.482108 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.583075 kubelet[2082]: E1213 04:53:51.582908 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.684166 kubelet[2082]: E1213 04:53:51.683983 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.785056 kubelet[2082]: E1213 04:53:51.784943 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.886181 kubelet[2082]: E1213 04:53:51.886109 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:51.938756 sudo[1919]: pam_unix(sudo:session): session closed for user root Dec 13 04:53:51.987090 kubelet[2082]: E1213 04:53:51.986978 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:52.084479 sshd[1915]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:52.087998 kubelet[2082]: E1213 04:53:52.087945 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:52.089344 systemd[1]: sshd@8-10.244.15.10:22-147.75.109.163:53958.service: Deactivated successfully. Dec 13 04:53:52.091491 kubelet[2082]: I1213 04:53:52.091112 2082 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 04:53:52.091491 kubelet[2082]: W1213 04:53:52.091332 2082 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:53:52.091491 kubelet[2082]: W1213 04:53:52.091385 2082 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:53:52.093983 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 04:53:52.096244 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Dec 13 04:53:52.098598 systemd-logind[1596]: Removed session 11. 
Dec 13 04:53:52.132728 kubelet[2082]: E1213 04:53:52.132645 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:52.188619 kubelet[2082]: E1213 04:53:52.188552 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:52.289154 kubelet[2082]: E1213 04:53:52.289094 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:52.389901 kubelet[2082]: E1213 04:53:52.389826 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:52.490105 kubelet[2082]: E1213 04:53:52.490030 2082 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.15.10\" not found" Dec 13 04:53:52.593046 kubelet[2082]: I1213 04:53:52.591863 2082 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 04:53:52.593245 containerd[1621]: time="2024-12-13T04:53:52.593124255Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 04:53:52.594842 kubelet[2082]: I1213 04:53:52.593841 2082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 04:53:53.133288 kubelet[2082]: I1213 04:53:53.133206 2082 apiserver.go:52] "Watching apiserver" Dec 13 04:53:53.133551 kubelet[2082]: E1213 04:53:53.133230 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:53.139450 kubelet[2082]: I1213 04:53:53.139392 2082 topology_manager.go:215] "Topology Admit Handler" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" podNamespace="kube-system" podName="cilium-9vv2l" Dec 13 04:53:53.141295 kubelet[2082]: I1213 04:53:53.139674 2082 topology_manager.go:215] "Topology Admit Handler" podUID="0f31fcbb-e1bc-40c8-a7e5-24decbf8e831" podNamespace="kube-system" podName="kube-proxy-qwk7p" Dec 13 04:53:53.163198 kubelet[2082]: I1213 04:53:53.163149 2082 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 04:53:53.175447 kubelet[2082]: I1213 04:53:53.174397 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ftt\" (UniqueName: \"kubernetes.io/projected/0f31fcbb-e1bc-40c8-a7e5-24decbf8e831-kube-api-access-b5ftt\") pod \"kube-proxy-qwk7p\" (UID: \"0f31fcbb-e1bc-40c8-a7e5-24decbf8e831\") " pod="kube-system/kube-proxy-qwk7p" Dec 13 04:53:53.175447 kubelet[2082]: I1213 04:53:53.174471 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-bpf-maps\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175447 kubelet[2082]: I1213 04:53:53.174508 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cni-path\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175447 kubelet[2082]: I1213 04:53:53.174539 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-config-path\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175447 kubelet[2082]: I1213 04:53:53.174570 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f31fcbb-e1bc-40c8-a7e5-24decbf8e831-lib-modules\") pod \"kube-proxy-qwk7p\" (UID: \"0f31fcbb-e1bc-40c8-a7e5-24decbf8e831\") " pod="kube-system/kube-proxy-qwk7p" Dec 13 04:53:53.175447 kubelet[2082]: I1213 04:53:53.174603 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-run\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175873 kubelet[2082]: I1213 04:53:53.174636 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-etc-cni-netd\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175873 kubelet[2082]: I1213 04:53:53.174667 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1678db66-9777-422c-b18b-75a7c2f9053f-clustermesh-secrets\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175873 kubelet[2082]: I1213 04:53:53.174698 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-kernel\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175873 kubelet[2082]: I1213 04:53:53.174729 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-kube-api-access-2vx5q\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.175873 kubelet[2082]: I1213 04:53:53.174758 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f31fcbb-e1bc-40c8-a7e5-24decbf8e831-kube-proxy\") pod \"kube-proxy-qwk7p\" (UID: \"0f31fcbb-e1bc-40c8-a7e5-24decbf8e831\") " pod="kube-system/kube-proxy-qwk7p" Dec 13 04:53:53.176166 kubelet[2082]: I1213 04:53:53.174832 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-cgroup\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.176166 kubelet[2082]: I1213 04:53:53.174866 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-xtables-lock\") pod \"cilium-9vv2l\" (UID: 
\"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.176166 kubelet[2082]: I1213 04:53:53.174899 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-net\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.176166 kubelet[2082]: I1213 04:53:53.174945 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-hubble-tls\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.176166 kubelet[2082]: I1213 04:53:53.174981 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f31fcbb-e1bc-40c8-a7e5-24decbf8e831-xtables-lock\") pod \"kube-proxy-qwk7p\" (UID: \"0f31fcbb-e1bc-40c8-a7e5-24decbf8e831\") " pod="kube-system/kube-proxy-qwk7p" Dec 13 04:53:53.176166 kubelet[2082]: I1213 04:53:53.175027 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-hostproc\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.176444 kubelet[2082]: I1213 04:53:53.175090 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-lib-modules\") pod \"cilium-9vv2l\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " pod="kube-system/cilium-9vv2l" Dec 13 04:53:53.448217 containerd[1621]: time="2024-12-13T04:53:53.446999098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vv2l,Uid:1678db66-9777-422c-b18b-75a7c2f9053f,Namespace:kube-system,Attempt:0,}" Dec 13 04:53:53.450869 containerd[1621]: time="2024-12-13T04:53:53.450824396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwk7p,Uid:0f31fcbb-e1bc-40c8-a7e5-24decbf8e831,Namespace:kube-system,Attempt:0,}" Dec 13 04:53:54.134565 kubelet[2082]: E1213 04:53:54.134505 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:54.222762 containerd[1621]: time="2024-12-13T04:53:54.221254222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:53:54.224329 containerd[1621]: time="2024-12-13T04:53:54.224279951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 04:53:54.226228 containerd[1621]: time="2024-12-13T04:53:54.226187281Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:53:54.227719 containerd[1621]: time="2024-12-13T04:53:54.227487032Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:53:54.228596 containerd[1621]: time="2024-12-13T04:53:54.228507551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 04:53:54.231934 containerd[1621]: time="2024-12-13T04:53:54.231860716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:53:54.233071 containerd[1621]: time="2024-12-13T04:53:54.233009545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 785.750665ms" Dec 13 04:53:54.238142 containerd[1621]: time="2024-12-13T04:53:54.237818335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 786.902934ms" Dec 13 04:53:54.296312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010540520.mount: Deactivated successfully. Dec 13 04:53:54.405807 containerd[1621]: time="2024-12-13T04:53:54.405472944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:53:54.406066 containerd[1621]: time="2024-12-13T04:53:54.405582382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:53:54.406066 containerd[1621]: time="2024-12-13T04:53:54.405606099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:54.406066 containerd[1621]: time="2024-12-13T04:53:54.405765232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:54.410669 containerd[1621]: time="2024-12-13T04:53:54.409301477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:53:54.410669 containerd[1621]: time="2024-12-13T04:53:54.409391784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:53:54.410669 containerd[1621]: time="2024-12-13T04:53:54.409417608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:54.410669 containerd[1621]: time="2024-12-13T04:53:54.409553459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:54.565171 containerd[1621]: time="2024-12-13T04:53:54.565092245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vv2l,Uid:1678db66-9777-422c-b18b-75a7c2f9053f,Namespace:kube-system,Attempt:0,} returns sandbox id \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\"" Dec 13 04:53:54.570488 containerd[1621]: time="2024-12-13T04:53:54.570316574Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 04:53:54.575792 containerd[1621]: time="2024-12-13T04:53:54.575748806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwk7p,Uid:0f31fcbb-e1bc-40c8-a7e5-24decbf8e831,Namespace:kube-system,Attempt:0,} returns sandbox id \"53daae8f73ee83dc631d26efd7abc421e1ddacae23407771c09a1208c1e2f0b1\"" Dec 13 04:53:55.135771 kubelet[2082]: E1213 04:53:55.135689 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:56.137236 kubelet[2082]: E1213 04:53:56.137145 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:57.137956 kubelet[2082]: E1213 04:53:57.137882 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:58.138700 kubelet[2082]: E1213 04:53:58.138637 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:53:59.138878 kubelet[2082]: E1213 04:53:59.138814 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:00.139874 kubelet[2082]: E1213 04:54:00.139787 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:01.140685 kubelet[2082]: E1213 04:54:01.140647 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:02.141499 kubelet[2082]: E1213 04:54:02.141374 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:03.142053 kubelet[2082]: E1213 04:54:03.141979 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:03.722271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839258359.mount: Deactivated successfully. Dec 13 04:54:04.143495 kubelet[2082]: E1213 04:54:04.143079 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:04.184329 update_engine[1602]: I20241213 04:54:04.183809 1602 update_attempter.cc:509] Updating boot flags... 
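
Not from the journal: the kubelet error repeated above, file_linux.go:61 "Unable to read config path" for /etc/kubernetes/manifests, recurs roughly once per second through the rest of this capture because the static pod path it was told to watch (see the "Adding static pod path" entry earlier) does not exist on this node. When skimming a journal like this, collapsing identical messages makes the one-off events stand out; a rough sketch, assuming `lines` holds the raw journal entries:

```python
import re
from collections import Counter

def collapse_repeats(lines):
    """Count identical kubelet messages after stripping the two leading timestamps."""
    counts = Counter()
    for line in lines:
        msg = re.sub(r"^\w+ \d+ [\d:.]+ ", "", line)                    # wall-clock prefix
        msg = re.sub(r"[EIW]\d{4} [\d:.]+\s+\d+ ", "", msg, count=1)    # klog header (E1213 ... 2082)
        counts[msg] += 1
    return counts.most_common()

lines = [
    'Dec 13 04:53:57.137956 kubelet[2082]: E1213 04:53:57.137882 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"',
    'Dec 13 04:53:58.138700 kubelet[2082]: E1213 04:53:58.138637 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"',
]
for msg, n in collapse_repeats(lines):
    print(n, msg)  # 2 kubelet[2082]: file_linux.go:61] "Unable to read config path" ...
```
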
Dec 13 04:54:04.253783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2233) Dec 13 04:54:04.361159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2234) Dec 13 04:54:04.457835 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2234) Dec 13 04:54:05.144535 kubelet[2082]: E1213 04:54:05.144468 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:06.146178 kubelet[2082]: E1213 04:54:06.146071 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:06.984074 containerd[1621]: time="2024-12-13T04:54:06.983919398Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:06.985284 containerd[1621]: time="2024-12-13T04:54:06.985230854Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735331" Dec 13 04:54:06.986048 containerd[1621]: time="2024-12-13T04:54:06.985942241Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:06.988726 containerd[1621]: time="2024-12-13T04:54:06.988503443Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.418139509s" Dec 13 04:54:06.988726 containerd[1621]: time="2024-12-13T04:54:06.988551151Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 04:54:06.990109 containerd[1621]: time="2024-12-13T04:54:06.990071696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 04:54:06.991434 containerd[1621]: time="2024-12-13T04:54:06.991288954Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:54:07.018311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921417594.mount: Deactivated successfully. Dec 13 04:54:07.021460 containerd[1621]: time="2024-12-13T04:54:07.021391008Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\"" Dec 13 04:54:07.022987 containerd[1621]: time="2024-12-13T04:54:07.022326764Z" level=info msg="StartContainer for \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\"" Dec 13 04:54:07.063820 systemd[1]: run-containerd-runc-k8s.io-d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b-runc.HoMzNv.mount: Deactivated successfully. 
Dec 13 04:54:07.103852 containerd[1621]: time="2024-12-13T04:54:07.103759304Z" level=info msg="StartContainer for \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\" returns successfully" Dec 13 04:54:07.147125 kubelet[2082]: E1213 04:54:07.147032 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:07.348905 containerd[1621]: time="2024-12-13T04:54:07.348578566Z" level=info msg="shim disconnected" id=d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b namespace=k8s.io Dec 13 04:54:07.348905 containerd[1621]: time="2024-12-13T04:54:07.348695217Z" level=warning msg="cleaning up after shim disconnected" id=d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b namespace=k8s.io Dec 13 04:54:07.348905 containerd[1621]: time="2024-12-13T04:54:07.348716681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:54:08.014390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b-rootfs.mount: Deactivated successfully. Dec 13 04:54:08.148083 kubelet[2082]: E1213 04:54:08.147944 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:08.350964 containerd[1621]: time="2024-12-13T04:54:08.350756186Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:54:08.370810 containerd[1621]: time="2024-12-13T04:54:08.370721763Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\"" Dec 13 04:54:08.374040 containerd[1621]: time="2024-12-13T04:54:08.373462071Z" level=info msg="StartContainer for \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\"" Dec 13 04:54:08.491518 containerd[1621]: time="2024-12-13T04:54:08.491442319Z" level=info msg="StartContainer for \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\" returns successfully" Dec 13 04:54:08.508525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:54:08.509963 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 04:54:08.510115 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 04:54:08.522472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 04:54:08.577157 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 04:54:08.657619 containerd[1621]: time="2024-12-13T04:54:08.656760399Z" level=info msg="shim disconnected" id=8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f namespace=k8s.io Dec 13 04:54:08.657619 containerd[1621]: time="2024-12-13T04:54:08.656848640Z" level=warning msg="cleaning up after shim disconnected" id=8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f namespace=k8s.io Dec 13 04:54:08.657619 containerd[1621]: time="2024-12-13T04:54:08.656865229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:54:09.011926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f-rootfs.mount: Deactivated successfully. 
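
Not from the journal: containerd refers to the cilium init containers by 64-character ids once they are running (StartContainer, "shim disconnected", the rootfs .mount units), while the readable names (mount-cgroup, apply-sysctl-overwrites, ...) appear only in the CreateContainer entries. A small sketch of mapping ids back to names when reading a capture of these entries; the patterns only assume the message shapes visible above:

```python
import re

# CreateContainer entries carry both the name and the id, e.g.
#   ... for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d13a3e6110d6...\"
CREATE_RE = re.compile(r'ContainerMetadata\{Name:([^,]+),.*?returns container id \\?"([0-9a-f]{64})')
# Later lifecycle entries ("shim disconnected" and friends) only carry id=<64 hex chars>.
ID_RE = re.compile(r"\bid=([0-9a-f]{64})\b")

def container_names(lines):
    """Map container id -> the Name: field taken from its CreateContainer entry."""
    return {cid: name for line in lines for name, cid in CREATE_RE.findall(line)}

def annotate(lines):
    names = container_names(lines)
    for line in lines:
        for cid in ID_RE.findall(line):
            print(f"{cid[:12]}... is {names.get(cid, 'unknown')}")
```

Fed the entries above, this would label d13a3e6110d6... as mount-cgroup and 8723165c8314... as apply-sysctl-overwrites.
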
Dec 13 04:54:09.013201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293311361.mount: Deactivated successfully. Dec 13 04:54:09.148551 kubelet[2082]: E1213 04:54:09.148486 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:09.356834 containerd[1621]: time="2024-12-13T04:54:09.355942073Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:54:09.399200 containerd[1621]: time="2024-12-13T04:54:09.397560618Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\"" Dec 13 04:54:09.400081 containerd[1621]: time="2024-12-13T04:54:09.399917549Z" level=info msg="StartContainer for \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\"" Dec 13 04:54:09.506103 containerd[1621]: time="2024-12-13T04:54:09.506007990Z" level=info msg="StartContainer for \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\" returns successfully" Dec 13 04:54:09.703955 containerd[1621]: time="2024-12-13T04:54:09.703751760Z" level=info msg="shim disconnected" id=9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5 namespace=k8s.io Dec 13 04:54:09.703955 containerd[1621]: time="2024-12-13T04:54:09.703918483Z" level=warning msg="cleaning up after shim disconnected" id=9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5 namespace=k8s.io Dec 13 04:54:09.704589 containerd[1621]: time="2024-12-13T04:54:09.703938371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:54:09.707092 containerd[1621]: time="2024-12-13T04:54:09.705832257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:09.707092 containerd[1621]: time="2024-12-13T04:54:09.706907631Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 04:54:09.708463 containerd[1621]: time="2024-12-13T04:54:09.708416403Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:09.712532 containerd[1621]: time="2024-12-13T04:54:09.712489296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:09.713673 containerd[1621]: time="2024-12-13T04:54:09.713637307Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.723518835s" Dec 13 04:54:09.713763 containerd[1621]: time="2024-12-13T04:54:09.713689067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 04:54:09.718182 containerd[1621]: 
time="2024-12-13T04:54:09.718130752Z" level=info msg="CreateContainer within sandbox \"53daae8f73ee83dc631d26efd7abc421e1ddacae23407771c09a1208c1e2f0b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 04:54:09.740219 containerd[1621]: time="2024-12-13T04:54:09.740142092Z" level=info msg="CreateContainer within sandbox \"53daae8f73ee83dc631d26efd7abc421e1ddacae23407771c09a1208c1e2f0b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1006a43b1c918cb43e04d2232a255cb502510832659e02ab215677f227dcf213\"" Dec 13 04:54:09.740783 containerd[1621]: time="2024-12-13T04:54:09.740751289Z" level=info msg="StartContainer for \"1006a43b1c918cb43e04d2232a255cb502510832659e02ab215677f227dcf213\"" Dec 13 04:54:09.822938 containerd[1621]: time="2024-12-13T04:54:09.822727816Z" level=info msg="StartContainer for \"1006a43b1c918cb43e04d2232a255cb502510832659e02ab215677f227dcf213\" returns successfully" Dec 13 04:54:10.013551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5-rootfs.mount: Deactivated successfully. Dec 13 04:54:10.151066 kubelet[2082]: E1213 04:54:10.148786 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:10.362904 containerd[1621]: time="2024-12-13T04:54:10.362147227Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:54:10.379516 containerd[1621]: time="2024-12-13T04:54:10.379368504Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\"" Dec 13 04:54:10.380228 containerd[1621]: time="2024-12-13T04:54:10.379946262Z" level=info msg="StartContainer for \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\"" Dec 13 04:54:10.462609 containerd[1621]: time="2024-12-13T04:54:10.459742858Z" level=info msg="StartContainer for \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\" returns successfully" Dec 13 04:54:10.491435 containerd[1621]: time="2024-12-13T04:54:10.491288558Z" level=info msg="shim disconnected" id=89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c namespace=k8s.io Dec 13 04:54:10.492113 containerd[1621]: time="2024-12-13T04:54:10.491781834Z" level=warning msg="cleaning up after shim disconnected" id=89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c namespace=k8s.io Dec 13 04:54:10.492113 containerd[1621]: time="2024-12-13T04:54:10.491808517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:54:11.012673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c-rootfs.mount: Deactivated successfully. 
Dec 13 04:54:11.130240 kubelet[2082]: E1213 04:54:11.130188 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:11.149423 kubelet[2082]: E1213 04:54:11.149372 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:11.372816 containerd[1621]: time="2024-12-13T04:54:11.372650553Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:54:11.389065 containerd[1621]: time="2024-12-13T04:54:11.388988447Z" level=info msg="CreateContainer within sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\"" Dec 13 04:54:11.391051 containerd[1621]: time="2024-12-13T04:54:11.389762345Z" level=info msg="StartContainer for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\"" Dec 13 04:54:11.393259 kubelet[2082]: I1213 04:54:11.393227 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qwk7p" podStartSLOduration=5.256197558 podStartE2EDuration="20.393134539s" podCreationTimestamp="2024-12-13 04:53:51 +0000 UTC" firstStartedPulling="2024-12-13 04:53:54.577373737 +0000 UTC m=+4.165999471" lastFinishedPulling="2024-12-13 04:54:09.714310707 +0000 UTC m=+19.302936452" observedRunningTime="2024-12-13 04:54:10.39461501 +0000 UTC m=+19.983240758" watchObservedRunningTime="2024-12-13 04:54:11.393134539 +0000 UTC m=+20.981760291" Dec 13 04:54:11.475959 containerd[1621]: time="2024-12-13T04:54:11.475872787Z" level=info msg="StartContainer for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" returns successfully" Dec 13 04:54:11.603043 kubelet[2082]: I1213 04:54:11.600638 2082 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 04:54:12.068045 kernel: Initializing XFRM netlink socket Dec 13 04:54:12.150620 kubelet[2082]: E1213 04:54:12.150519 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:12.399670 kubelet[2082]: I1213 04:54:12.399455 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9vv2l" podStartSLOduration=8.979259095 podStartE2EDuration="21.399405518s" podCreationTimestamp="2024-12-13 04:53:51 +0000 UTC" firstStartedPulling="2024-12-13 04:53:54.569043978 +0000 UTC m=+4.157669719" lastFinishedPulling="2024-12-13 04:54:06.989190401 +0000 UTC m=+16.577816142" observedRunningTime="2024-12-13 04:54:12.398732357 +0000 UTC m=+21.987358122" watchObservedRunningTime="2024-12-13 04:54:12.399405518 +0000 UTC m=+21.988031266" Dec 13 04:54:13.151698 kubelet[2082]: E1213 04:54:13.151594 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:13.800495 systemd-networkd[1253]: cilium_host: Link UP Dec 13 04:54:13.800762 systemd-networkd[1253]: cilium_net: Link UP Dec 13 04:54:13.801099 systemd-networkd[1253]: cilium_net: Gained carrier Dec 13 04:54:13.803832 systemd-networkd[1253]: cilium_host: Gained carrier Dec 13 04:54:13.965173 systemd-networkd[1253]: cilium_vxlan: Link UP Dec 13 04:54:13.966128 systemd-networkd[1253]: cilium_vxlan: Gained carrier Dec 
13 04:54:14.152739 kubelet[2082]: E1213 04:54:14.152510 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:14.327200 systemd-networkd[1253]: cilium_host: Gained IPv6LL Dec 13 04:54:14.359119 kernel: NET: Registered PF_ALG protocol family Dec 13 04:54:14.775323 systemd-networkd[1253]: cilium_net: Gained IPv6LL Dec 13 04:54:15.153981 kubelet[2082]: E1213 04:54:15.153382 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:15.375733 systemd-networkd[1253]: lxc_health: Link UP Dec 13 04:54:15.385104 systemd-networkd[1253]: lxc_health: Gained carrier Dec 13 04:54:15.415464 systemd-networkd[1253]: cilium_vxlan: Gained IPv6LL Dec 13 04:54:16.154206 kubelet[2082]: E1213 04:54:16.154131 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:16.488046 kubelet[2082]: I1213 04:54:16.485155 2082 topology_manager.go:215] "Topology Admit Handler" podUID="73a7470f-e609-4ad9-8ed4-5bec9cffd422" podNamespace="default" podName="nginx-deployment-6d5f899847-7cqx4" Dec 13 04:54:16.630999 kubelet[2082]: I1213 04:54:16.630917 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6fj\" (UniqueName: \"kubernetes.io/projected/73a7470f-e609-4ad9-8ed4-5bec9cffd422-kube-api-access-7b6fj\") pod \"nginx-deployment-6d5f899847-7cqx4\" (UID: \"73a7470f-e609-4ad9-8ed4-5bec9cffd422\") " pod="default/nginx-deployment-6d5f899847-7cqx4" Dec 13 04:54:16.797473 containerd[1621]: time="2024-12-13T04:54:16.797226288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7cqx4,Uid:73a7470f-e609-4ad9-8ed4-5bec9cffd422,Namespace:default,Attempt:0,}" Dec 13 04:54:16.888425 systemd-networkd[1253]: lxc_health: Gained IPv6LL Dec 13 04:54:16.901071 systemd-networkd[1253]: lxc56c44246e455: Link UP Dec 13 04:54:16.925688 kernel: eth0: renamed from tmpb49c3 Dec 13 04:54:16.939338 systemd-networkd[1253]: lxc56c44246e455: Gained carrier Dec 13 04:54:17.155562 kubelet[2082]: E1213 04:54:17.155194 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:18.155759 kubelet[2082]: E1213 04:54:18.155672 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:18.807532 systemd-networkd[1253]: lxc56c44246e455: Gained IPv6LL Dec 13 04:54:19.157169 kubelet[2082]: E1213 04:54:19.156848 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:20.157280 kubelet[2082]: E1213 04:54:20.157185 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:21.158065 kubelet[2082]: E1213 04:54:21.157897 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:22.057460 containerd[1621]: time="2024-12-13T04:54:22.057105820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:54:22.057460 containerd[1621]: time="2024-12-13T04:54:22.057215051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:54:22.057460 containerd[1621]: time="2024-12-13T04:54:22.057233158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:54:22.058425 containerd[1621]: time="2024-12-13T04:54:22.057643854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:54:22.158670 kubelet[2082]: E1213 04:54:22.158589 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:22.163036 containerd[1621]: time="2024-12-13T04:54:22.162946805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7cqx4,Uid:73a7470f-e609-4ad9-8ed4-5bec9cffd422,Namespace:default,Attempt:0,} returns sandbox id \"b49c34af6f14362983a7c7dd5fc9cba40e463fce002010a6277c45effdd9dc5d\"" Dec 13 04:54:22.165530 containerd[1621]: time="2024-12-13T04:54:22.165497589Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 04:54:23.159895 kubelet[2082]: E1213 04:54:23.159819 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:24.160816 kubelet[2082]: E1213 04:54:24.160738 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:25.161737 kubelet[2082]: E1213 04:54:25.161677 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:26.058407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066259486.mount: Deactivated successfully. 
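
An aside, not from the journal: the pod_startup_latency_tracker entries further up report both a podStartE2EDuration and a smaller podStartSLOduration, and the difference is the firstStartedPulling to lastFinishedPulling window, which suggests the SLO figure simply excludes time spent pulling images. A quick check against the cilium-9vv2l numbers (timestamps copied from that entry; Decimal is used only to keep the nanosecond digits exact):

```python
from decimal import Decimal

def secs(ts: str) -> Decimal:
    """'04:54:12.399405518' -> seconds since midnight, exact to the nanosecond."""
    h, m, s = ts.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

created      = secs("04:53:51")            # podCreationTimestamp
observed     = secs("04:54:12.399405518")  # watchObservedRunningTime
pull_started = secs("04:53:54.569043978")  # firstStartedPulling
pull_done    = secs("04:54:06.989190401")  # lastFinishedPulling

e2e = observed - created                   # Decimal('21.399405518')
slo = e2e - (pull_done - pull_started)     # Decimal('8.979259095')
print(f"podStartE2EDuration={e2e}s podStartSLOduration={slo}")
```

Both figures match the cilium-9vv2l entry exactly; the kube-proxy-qwk7p entry works out the same way to within a few tens of nanoseconds.
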
Dec 13 04:54:26.162420 kubelet[2082]: E1213 04:54:26.162350 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:27.162958 kubelet[2082]: E1213 04:54:27.162874 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:27.684459 containerd[1621]: time="2024-12-13T04:54:27.684369302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:27.687728 containerd[1621]: time="2024-12-13T04:54:27.687681294Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 04:54:27.689234 containerd[1621]: time="2024-12-13T04:54:27.689197522Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:27.694132 containerd[1621]: time="2024-12-13T04:54:27.694094167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:27.695842 containerd[1621]: time="2024-12-13T04:54:27.695124680Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 5.529572556s" Dec 13 04:54:27.696576 containerd[1621]: time="2024-12-13T04:54:27.696543920Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 04:54:27.698998 containerd[1621]: time="2024-12-13T04:54:27.698947000Z" level=info msg="CreateContainer within sandbox \"b49c34af6f14362983a7c7dd5fc9cba40e463fce002010a6277c45effdd9dc5d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 04:54:27.713028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669872011.mount: Deactivated successfully. 
Dec 13 04:54:27.719759 containerd[1621]: time="2024-12-13T04:54:27.719611833Z" level=info msg="CreateContainer within sandbox \"b49c34af6f14362983a7c7dd5fc9cba40e463fce002010a6277c45effdd9dc5d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6f736c772562670ad02a3bbf5795552f2fb179ca389e4fbf7167b82447f0d033\"" Dec 13 04:54:27.720717 containerd[1621]: time="2024-12-13T04:54:27.720548866Z" level=info msg="StartContainer for \"6f736c772562670ad02a3bbf5795552f2fb179ca389e4fbf7167b82447f0d033\"" Dec 13 04:54:27.798395 containerd[1621]: time="2024-12-13T04:54:27.798269420Z" level=info msg="StartContainer for \"6f736c772562670ad02a3bbf5795552f2fb179ca389e4fbf7167b82447f0d033\" returns successfully" Dec 13 04:54:28.163863 kubelet[2082]: E1213 04:54:28.163742 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:28.445749 kubelet[2082]: I1213 04:54:28.445437 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-7cqx4" podStartSLOduration=6.913154715 podStartE2EDuration="12.445365502s" podCreationTimestamp="2024-12-13 04:54:16 +0000 UTC" firstStartedPulling="2024-12-13 04:54:22.164899753 +0000 UTC m=+31.753525487" lastFinishedPulling="2024-12-13 04:54:27.697110534 +0000 UTC m=+37.285736274" observedRunningTime="2024-12-13 04:54:28.445217922 +0000 UTC m=+38.033843675" watchObservedRunningTime="2024-12-13 04:54:28.445365502 +0000 UTC m=+38.033991249" Dec 13 04:54:29.165058 kubelet[2082]: E1213 04:54:29.164907 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:30.166088 kubelet[2082]: E1213 04:54:30.165912 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:31.129951 kubelet[2082]: E1213 04:54:31.129870 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:31.167226 kubelet[2082]: E1213 04:54:31.167150 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:32.167879 kubelet[2082]: E1213 04:54:32.167784 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:33.168979 kubelet[2082]: E1213 04:54:33.168867 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:34.169661 kubelet[2082]: E1213 04:54:34.169542 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:35.170677 kubelet[2082]: E1213 04:54:35.170550 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:36.171804 kubelet[2082]: E1213 04:54:36.171714 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:37.172532 kubelet[2082]: E1213 04:54:37.172454 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:37.835820 kubelet[2082]: I1213 04:54:37.835650 2082 topology_manager.go:215] "Topology Admit Handler" podUID="65e665fa-1504-4e4d-925b-bf4fe9d5f59c" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 
04:54:37.966765 kubelet[2082]: I1213 04:54:37.966653 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/65e665fa-1504-4e4d-925b-bf4fe9d5f59c-data\") pod \"nfs-server-provisioner-0\" (UID: \"65e665fa-1504-4e4d-925b-bf4fe9d5f59c\") " pod="default/nfs-server-provisioner-0" Dec 13 04:54:37.966765 kubelet[2082]: I1213 04:54:37.966779 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjn9n\" (UniqueName: \"kubernetes.io/projected/65e665fa-1504-4e4d-925b-bf4fe9d5f59c-kube-api-access-gjn9n\") pod \"nfs-server-provisioner-0\" (UID: \"65e665fa-1504-4e4d-925b-bf4fe9d5f59c\") " pod="default/nfs-server-provisioner-0" Dec 13 04:54:38.142699 containerd[1621]: time="2024-12-13T04:54:38.142391334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:65e665fa-1504-4e4d-925b-bf4fe9d5f59c,Namespace:default,Attempt:0,}" Dec 13 04:54:38.173280 kubelet[2082]: E1213 04:54:38.173198 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:38.199557 systemd-networkd[1253]: lxc577c93b38e5c: Link UP Dec 13 04:54:38.205838 kernel: eth0: renamed from tmp1383a Dec 13 04:54:38.210750 systemd-networkd[1253]: lxc577c93b38e5c: Gained carrier Dec 13 04:54:38.442627 containerd[1621]: time="2024-12-13T04:54:38.441953657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:54:38.443957 containerd[1621]: time="2024-12-13T04:54:38.442924050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:54:38.444353 containerd[1621]: time="2024-12-13T04:54:38.444114762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:54:38.444353 containerd[1621]: time="2024-12-13T04:54:38.444271738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:54:38.525042 containerd[1621]: time="2024-12-13T04:54:38.524945966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:65e665fa-1504-4e4d-925b-bf4fe9d5f59c,Namespace:default,Attempt:0,} returns sandbox id \"1383abb2f9020567ff0af292d5066fc630db13d6aa8bbbce437e6758a1b04147\"" Dec 13 04:54:38.528197 containerd[1621]: time="2024-12-13T04:54:38.528085972Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 04:54:39.174240 kubelet[2082]: E1213 04:54:39.174160 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:39.543317 systemd-networkd[1253]: lxc577c93b38e5c: Gained IPv6LL Dec 13 04:54:40.174773 kubelet[2082]: E1213 04:54:40.174648 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:41.175506 kubelet[2082]: E1213 04:54:41.175426 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:42.176420 kubelet[2082]: E1213 04:54:42.176333 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:42.838793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850061125.mount: Deactivated successfully. Dec 13 04:54:43.179224 kubelet[2082]: E1213 04:54:43.177471 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:44.178238 kubelet[2082]: E1213 04:54:44.178169 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:45.179539 kubelet[2082]: E1213 04:54:45.179323 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:45.585520 containerd[1621]: time="2024-12-13T04:54:45.585421892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:45.587693 containerd[1621]: time="2024-12-13T04:54:45.587628622Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 04:54:45.590114 containerd[1621]: time="2024-12-13T04:54:45.587874971Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:45.592884 containerd[1621]: time="2024-12-13T04:54:45.592799995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:45.595376 containerd[1621]: time="2024-12-13T04:54:45.595308564Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.067161239s" Dec 13 04:54:45.595485 containerd[1621]: 
time="2024-12-13T04:54:45.595366572Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 04:54:45.599116 containerd[1621]: time="2024-12-13T04:54:45.599079340Z" level=info msg="CreateContainer within sandbox \"1383abb2f9020567ff0af292d5066fc630db13d6aa8bbbce437e6758a1b04147\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 04:54:45.617301 containerd[1621]: time="2024-12-13T04:54:45.617062990Z" level=info msg="CreateContainer within sandbox \"1383abb2f9020567ff0af292d5066fc630db13d6aa8bbbce437e6758a1b04147\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"33e3ec2a5648dd6b2f75a63facc26fd8bfa1fd227a605631771471b42b5b986c\"" Dec 13 04:54:45.618492 containerd[1621]: time="2024-12-13T04:54:45.618419402Z" level=info msg="StartContainer for \"33e3ec2a5648dd6b2f75a63facc26fd8bfa1fd227a605631771471b42b5b986c\"" Dec 13 04:54:45.704547 containerd[1621]: time="2024-12-13T04:54:45.704462375Z" level=info msg="StartContainer for \"33e3ec2a5648dd6b2f75a63facc26fd8bfa1fd227a605631771471b42b5b986c\" returns successfully" Dec 13 04:54:46.180261 kubelet[2082]: E1213 04:54:46.180149 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:47.181093 kubelet[2082]: E1213 04:54:47.180936 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:48.182192 kubelet[2082]: E1213 04:54:48.182068 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:49.182964 kubelet[2082]: E1213 04:54:49.182863 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:50.183842 kubelet[2082]: E1213 04:54:50.183718 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:51.130251 kubelet[2082]: E1213 04:54:51.130143 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:51.184484 kubelet[2082]: E1213 04:54:51.184429 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:52.184845 kubelet[2082]: E1213 04:54:52.184755 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:53.185251 kubelet[2082]: E1213 04:54:53.185170 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:54.186863 kubelet[2082]: E1213 04:54:54.186262 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:54.996306 kubelet[2082]: I1213 04:54:54.996247 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.927528716 podStartE2EDuration="17.996145926s" podCreationTimestamp="2024-12-13 04:54:37 +0000 UTC" firstStartedPulling="2024-12-13 04:54:38.52752059 +0000 UTC m=+48.116146333" lastFinishedPulling="2024-12-13 04:54:45.596137792 +0000 UTC m=+55.184763543" observedRunningTime="2024-12-13 04:54:46.518188482 +0000 UTC 
m=+56.106814237" watchObservedRunningTime="2024-12-13 04:54:54.996145926 +0000 UTC m=+64.584771674" Dec 13 04:54:54.996643 kubelet[2082]: I1213 04:54:54.996476 2082 topology_manager.go:215] "Topology Admit Handler" podUID="87d3969a-ee56-428d-b28a-bef072664e21" podNamespace="default" podName="test-pod-1" Dec 13 04:54:55.168339 kubelet[2082]: I1213 04:54:55.168146 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12ac0763-a2ec-49ae-a17e-14df108f7d52\" (UniqueName: \"kubernetes.io/nfs/87d3969a-ee56-428d-b28a-bef072664e21-pvc-12ac0763-a2ec-49ae-a17e-14df108f7d52\") pod \"test-pod-1\" (UID: \"87d3969a-ee56-428d-b28a-bef072664e21\") " pod="default/test-pod-1" Dec 13 04:54:55.168339 kubelet[2082]: I1213 04:54:55.168221 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckwl\" (UniqueName: \"kubernetes.io/projected/87d3969a-ee56-428d-b28a-bef072664e21-kube-api-access-5ckwl\") pod \"test-pod-1\" (UID: \"87d3969a-ee56-428d-b28a-bef072664e21\") " pod="default/test-pod-1" Dec 13 04:54:55.186503 kubelet[2082]: E1213 04:54:55.186449 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:55.316326 kernel: FS-Cache: Loaded Dec 13 04:54:55.405135 kernel: RPC: Registered named UNIX socket transport module. Dec 13 04:54:55.405321 kernel: RPC: Registered udp transport module. Dec 13 04:54:55.405391 kernel: RPC: Registered tcp transport module. Dec 13 04:54:55.406250 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 04:54:55.407307 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 04:54:55.742404 kernel: NFS: Registering the id_resolver key type Dec 13 04:54:55.742647 kernel: Key type id_resolver registered Dec 13 04:54:55.743277 kernel: Key type id_legacy registered Dec 13 04:54:55.794918 nfsidmap[3487]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 04:54:55.803595 nfsidmap[3490]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 04:54:55.902561 containerd[1621]: time="2024-12-13T04:54:55.902438587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:87d3969a-ee56-428d-b28a-bef072664e21,Namespace:default,Attempt:0,}" Dec 13 04:54:55.940645 systemd-networkd[1253]: lxca79b7fca10e6: Link UP Dec 13 04:54:55.965159 kernel: eth0: renamed from tmpd0380 Dec 13 04:54:55.974403 systemd-networkd[1253]: lxca79b7fca10e6: Gained carrier Dec 13 04:54:56.187229 kubelet[2082]: E1213 04:54:56.187069 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:56.228757 containerd[1621]: time="2024-12-13T04:54:56.228334761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:54:56.228757 containerd[1621]: time="2024-12-13T04:54:56.228470324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:54:56.228757 containerd[1621]: time="2024-12-13T04:54:56.228496204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:54:56.228757 containerd[1621]: time="2024-12-13T04:54:56.228662582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:54:56.310908 containerd[1621]: time="2024-12-13T04:54:56.310816165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:87d3969a-ee56-428d-b28a-bef072664e21,Namespace:default,Attempt:0,} returns sandbox id \"d0380c4d1ce10d168418b86f751641e10f19c89667f8a30822ee4e06bdc5b88c\"" Dec 13 04:54:56.313477 containerd[1621]: time="2024-12-13T04:54:56.313444856Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 04:54:56.702537 containerd[1621]: time="2024-12-13T04:54:56.701919809Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 04:54:56.702537 containerd[1621]: time="2024-12-13T04:54:56.702034685Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:54:56.708726 containerd[1621]: time="2024-12-13T04:54:56.708686191Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 395.197288ms" Dec 13 04:54:56.708837 containerd[1621]: time="2024-12-13T04:54:56.708731779Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 04:54:56.712993 containerd[1621]: time="2024-12-13T04:54:56.712799465Z" level=info msg="CreateContainer within sandbox \"d0380c4d1ce10d168418b86f751641e10f19c89667f8a30822ee4e06bdc5b88c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 04:54:56.783998 containerd[1621]: time="2024-12-13T04:54:56.783918454Z" level=info msg="CreateContainer within sandbox \"d0380c4d1ce10d168418b86f751641e10f19c89667f8a30822ee4e06bdc5b88c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"261042120c4725043e1c260cb3d991d0ae03f62e33b8dd11fe26841eac6312d1\"" Dec 13 04:54:56.786171 containerd[1621]: time="2024-12-13T04:54:56.784915426Z" level=info msg="StartContainer for \"261042120c4725043e1c260cb3d991d0ae03f62e33b8dd11fe26841eac6312d1\"" Dec 13 04:54:56.867498 containerd[1621]: time="2024-12-13T04:54:56.861425018Z" level=info msg="StartContainer for \"261042120c4725043e1c260cb3d991d0ae03f62e33b8dd11fe26841eac6312d1\" returns successfully" Dec 13 04:54:57.189059 kubelet[2082]: E1213 04:54:57.188853 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:57.207472 systemd-networkd[1253]: lxca79b7fca10e6: Gained IPv6LL Dec 13 04:54:57.533346 kubelet[2082]: I1213 04:54:57.533298 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.13686397 podStartE2EDuration="18.533237288s" podCreationTimestamp="2024-12-13 04:54:39 +0000 UTC" firstStartedPulling="2024-12-13 04:54:56.312735412 +0000 UTC m=+65.901361152" lastFinishedPulling="2024-12-13 04:54:56.70910873 +0000 UTC m=+66.297734470" observedRunningTime="2024-12-13 04:54:57.53320918 +0000 UTC m=+67.121834933" watchObservedRunningTime="2024-12-13 
04:54:57.533237288 +0000 UTC m=+67.121863031" Dec 13 04:54:58.190868 kubelet[2082]: E1213 04:54:58.190799 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:54:59.191489 kubelet[2082]: E1213 04:54:59.191409 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:00.192141 kubelet[2082]: E1213 04:55:00.192051 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:01.192463 kubelet[2082]: E1213 04:55:01.192347 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:02.193320 kubelet[2082]: E1213 04:55:02.193244 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:03.194116 kubelet[2082]: E1213 04:55:03.193996 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:04.195050 kubelet[2082]: E1213 04:55:04.194925 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:05.196230 kubelet[2082]: E1213 04:55:05.196149 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:05.729112 containerd[1621]: time="2024-12-13T04:55:05.729048108Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:55:05.756077 containerd[1621]: time="2024-12-13T04:55:05.755975410Z" level=info msg="StopContainer for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" with timeout 2 (s)" Dec 13 04:55:05.756588 containerd[1621]: time="2024-12-13T04:55:05.756556825Z" level=info msg="Stop container \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" with signal terminated" Dec 13 04:55:05.766458 systemd-networkd[1253]: lxc_health: Link DOWN Dec 13 04:55:05.766470 systemd-networkd[1253]: lxc_health: Lost carrier Dec 13 04:55:05.827617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6-rootfs.mount: Deactivated successfully. 
Dec 13 04:55:06.006911 containerd[1621]: time="2024-12-13T04:55:05.980353406Z" level=info msg="shim disconnected" id=9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6 namespace=k8s.io Dec 13 04:55:06.006911 containerd[1621]: time="2024-12-13T04:55:06.006515249Z" level=warning msg="cleaning up after shim disconnected" id=9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6 namespace=k8s.io Dec 13 04:55:06.006911 containerd[1621]: time="2024-12-13T04:55:06.006545914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:55:06.027486 containerd[1621]: time="2024-12-13T04:55:06.027361942Z" level=info msg="StopContainer for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" returns successfully" Dec 13 04:55:06.044113 containerd[1621]: time="2024-12-13T04:55:06.043858798Z" level=info msg="StopPodSandbox for \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\"" Dec 13 04:55:06.044113 containerd[1621]: time="2024-12-13T04:55:06.043949640Z" level=info msg="Container to stop \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:55:06.044113 containerd[1621]: time="2024-12-13T04:55:06.043974145Z" level=info msg="Container to stop \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:55:06.044113 containerd[1621]: time="2024-12-13T04:55:06.043989775Z" level=info msg="Container to stop \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:55:06.044113 containerd[1621]: time="2024-12-13T04:55:06.044005296Z" level=info msg="Container to stop \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:55:06.044113 containerd[1621]: time="2024-12-13T04:55:06.044040295Z" level=info msg="Container to stop \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:55:06.047541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810-shm.mount: Deactivated successfully. Dec 13 04:55:06.081558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810-rootfs.mount: Deactivated successfully. 
Dec 13 04:55:06.083536 containerd[1621]: time="2024-12-13T04:55:06.083192839Z" level=info msg="shim disconnected" id=869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810 namespace=k8s.io Dec 13 04:55:06.083536 containerd[1621]: time="2024-12-13T04:55:06.083265523Z" level=warning msg="cleaning up after shim disconnected" id=869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810 namespace=k8s.io Dec 13 04:55:06.083536 containerd[1621]: time="2024-12-13T04:55:06.083284300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:55:06.113177 containerd[1621]: time="2024-12-13T04:55:06.112920393Z" level=info msg="TearDown network for sandbox \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" successfully" Dec 13 04:55:06.113177 containerd[1621]: time="2024-12-13T04:55:06.112980173Z" level=info msg="StopPodSandbox for \"869f272628298873d1da6c56db8482ed08d960868c5cd7b55775bc9708966810\" returns successfully" Dec 13 04:55:06.197430 kubelet[2082]: E1213 04:55:06.197254 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:06.242995 kubelet[2082]: I1213 04:55:06.242928 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cni-path\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.242995 kubelet[2082]: I1213 04:55:06.243002 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-run\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243297 kubelet[2082]: I1213 04:55:06.243082 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1678db66-9777-422c-b18b-75a7c2f9053f-clustermesh-secrets\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243297 kubelet[2082]: I1213 04:55:06.243123 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-kube-api-access-2vx5q\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243297 kubelet[2082]: I1213 04:55:06.243151 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-xtables-lock\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243297 kubelet[2082]: I1213 04:55:06.243182 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-hubble-tls\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243297 kubelet[2082]: I1213 04:55:06.243207 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-hostproc\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: 
\"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243297 kubelet[2082]: I1213 04:55:06.243233 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-bpf-maps\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243614 kubelet[2082]: I1213 04:55:06.243266 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-config-path\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243614 kubelet[2082]: I1213 04:55:06.243293 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-etc-cni-netd\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243614 kubelet[2082]: I1213 04:55:06.243334 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-net\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243614 kubelet[2082]: I1213 04:55:06.243365 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-lib-modules\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243614 kubelet[2082]: I1213 04:55:06.243392 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-kernel\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243614 kubelet[2082]: I1213 04:55:06.243419 2082 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-cgroup\") pod \"1678db66-9777-422c-b18b-75a7c2f9053f\" (UID: \"1678db66-9777-422c-b18b-75a7c2f9053f\") " Dec 13 04:55:06.243886 kubelet[2082]: I1213 04:55:06.243535 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.243886 kubelet[2082]: I1213 04:55:06.243599 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.244517 kubelet[2082]: I1213 04:55:06.244067 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cni-path" (OuterVolumeSpecName: "cni-path") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.244517 kubelet[2082]: I1213 04:55:06.244131 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.247829 kubelet[2082]: I1213 04:55:06.247796 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.247921 kubelet[2082]: I1213 04:55:06.247847 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.247921 kubelet[2082]: I1213 04:55:06.247879 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.247921 kubelet[2082]: I1213 04:55:06.247908 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.248164 kubelet[2082]: I1213 04:55:06.248136 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.250062 kubelet[2082]: I1213 04:55:06.250032 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:55:06.250788 kubelet[2082]: I1213 04:55:06.250185 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1678db66-9777-422c-b18b-75a7c2f9053f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:55:06.250788 kubelet[2082]: I1213 04:55:06.250259 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-hostproc" (OuterVolumeSpecName: "hostproc") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:55:06.251096 kubelet[2082]: I1213 04:55:06.250998 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-kube-api-access-2vx5q" (OuterVolumeSpecName: "kube-api-access-2vx5q") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "kube-api-access-2vx5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:55:06.252976 kubelet[2082]: I1213 04:55:06.252886 2082 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1678db66-9777-422c-b18b-75a7c2f9053f" (UID: "1678db66-9777-422c-b18b-75a7c2f9053f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:55:06.272272 kubelet[2082]: E1213 04:55:06.272163 2082 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:55:06.344296 kubelet[2082]: I1213 04:55:06.344197 2082 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-lib-modules\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344296 kubelet[2082]: I1213 04:55:06.344286 2082 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-kernel\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344296 kubelet[2082]: I1213 04:55:06.344309 2082 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-cgroup\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344349 2082 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cni-path\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344366 2082 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-run\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344382 2082 reconciler_common.go:300] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1678db66-9777-422c-b18b-75a7c2f9053f-clustermesh-secrets\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344398 2082 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2vx5q\" (UniqueName: \"kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-kube-api-access-2vx5q\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344415 2082 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-xtables-lock\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344509 2082 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-hostproc\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344542 2082 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-bpf-maps\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.344653 kubelet[2082]: I1213 04:55:06.344562 2082 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1678db66-9777-422c-b18b-75a7c2f9053f-cilium-config-path\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.345055 kubelet[2082]: I1213 04:55:06.344577 2082 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-etc-cni-netd\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.345055 kubelet[2082]: I1213 04:55:06.344594 2082 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1678db66-9777-422c-b18b-75a7c2f9053f-host-proc-sys-net\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.345055 kubelet[2082]: I1213 04:55:06.344609 2082 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1678db66-9777-422c-b18b-75a7c2f9053f-hubble-tls\") on node \"10.244.15.10\" DevicePath \"\"" Dec 13 04:55:06.548859 kubelet[2082]: I1213 04:55:06.548690 2082 scope.go:117] "RemoveContainer" containerID="9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6" Dec 13 04:55:06.552549 containerd[1621]: time="2024-12-13T04:55:06.552103623Z" level=info msg="RemoveContainer for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\"" Dec 13 04:55:06.558106 containerd[1621]: time="2024-12-13T04:55:06.558066230Z" level=info msg="RemoveContainer for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" returns successfully" Dec 13 04:55:06.558623 kubelet[2082]: I1213 04:55:06.558501 2082 scope.go:117] "RemoveContainer" containerID="89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c" Dec 13 04:55:06.561390 containerd[1621]: time="2024-12-13T04:55:06.561148670Z" level=info msg="RemoveContainer for \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\"" Dec 13 04:55:06.569573 containerd[1621]: time="2024-12-13T04:55:06.569297633Z" level=info msg="RemoveContainer for \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\" returns successfully" Dec 13 04:55:06.569671 kubelet[2082]: I1213 04:55:06.569482 2082 scope.go:117] 
"RemoveContainer" containerID="9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5" Dec 13 04:55:06.571001 containerd[1621]: time="2024-12-13T04:55:06.570941305Z" level=info msg="RemoveContainer for \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\"" Dec 13 04:55:06.574049 containerd[1621]: time="2024-12-13T04:55:06.573964696Z" level=info msg="RemoveContainer for \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\" returns successfully" Dec 13 04:55:06.574352 kubelet[2082]: I1213 04:55:06.574205 2082 scope.go:117] "RemoveContainer" containerID="8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f" Dec 13 04:55:06.575576 containerd[1621]: time="2024-12-13T04:55:06.575529637Z" level=info msg="RemoveContainer for \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\"" Dec 13 04:55:06.578155 containerd[1621]: time="2024-12-13T04:55:06.578123257Z" level=info msg="RemoveContainer for \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\" returns successfully" Dec 13 04:55:06.578513 kubelet[2082]: I1213 04:55:06.578417 2082 scope.go:117] "RemoveContainer" containerID="d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b" Dec 13 04:55:06.579802 containerd[1621]: time="2024-12-13T04:55:06.579772329Z" level=info msg="RemoveContainer for \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\"" Dec 13 04:55:06.582648 containerd[1621]: time="2024-12-13T04:55:06.582550839Z" level=info msg="RemoveContainer for \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\" returns successfully" Dec 13 04:55:06.582749 kubelet[2082]: I1213 04:55:06.582726 2082 scope.go:117] "RemoveContainer" containerID="9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6" Dec 13 04:55:06.587270 containerd[1621]: time="2024-12-13T04:55:06.587202898Z" level=error msg="ContainerStatus for \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\": not found" Dec 13 04:55:06.599347 kubelet[2082]: E1213 04:55:06.599293 2082 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\": not found" containerID="9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6" Dec 13 04:55:06.599509 kubelet[2082]: I1213 04:55:06.599447 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6"} err="failed to get container status \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f2cd45fc79156d9a9bdf6ce7e23c39f941c7dd12784ff85fac62900073664d6\": not found" Dec 13 04:55:06.599509 kubelet[2082]: I1213 04:55:06.599476 2082 scope.go:117] "RemoveContainer" containerID="89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c" Dec 13 04:55:06.600212 containerd[1621]: time="2024-12-13T04:55:06.599833865Z" level=error msg="ContainerStatus for \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\": not found" Dec 13 04:55:06.600296 kubelet[2082]: E1213 04:55:06.600070 2082 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\": not found" containerID="89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c" Dec 13 04:55:06.600296 kubelet[2082]: I1213 04:55:06.600108 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c"} err="failed to get container status \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\": rpc error: code = NotFound desc = an error occurred when try to find container \"89a1fb6f6559a45e2e9a1ccd1a6c92b8142f3c4146ff86d949bf88f35397f05c\": not found" Dec 13 04:55:06.600296 kubelet[2082]: I1213 04:55:06.600127 2082 scope.go:117] "RemoveContainer" containerID="9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5" Dec 13 04:55:06.600781 containerd[1621]: time="2024-12-13T04:55:06.600687419Z" level=error msg="ContainerStatus for \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\": not found" Dec 13 04:55:06.600914 kubelet[2082]: E1213 04:55:06.600863 2082 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\": not found" containerID="9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5" Dec 13 04:55:06.600986 kubelet[2082]: I1213 04:55:06.600940 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5"} err="failed to get container status \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9465fd4ceba87953955a02a2fd3dd499e0e8c63b823a1122e5a34ed895600df5\": not found" Dec 13 04:55:06.601082 kubelet[2082]: I1213 04:55:06.600993 2082 scope.go:117] "RemoveContainer" containerID="8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f" Dec 13 04:55:06.601489 containerd[1621]: time="2024-12-13T04:55:06.601353725Z" level=error msg="ContainerStatus for \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\": not found" Dec 13 04:55:06.601575 kubelet[2082]: E1213 04:55:06.601556 2082 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\": not found" containerID="8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f" Dec 13 04:55:06.601640 kubelet[2082]: I1213 04:55:06.601588 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f"} err="failed to get container status 
\"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8723165c831443dbfb15539d4b094b6575aa221f8b23bceafe2b07762630627f\": not found" Dec 13 04:55:06.601640 kubelet[2082]: I1213 04:55:06.601604 2082 scope.go:117] "RemoveContainer" containerID="d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b" Dec 13 04:55:06.601964 containerd[1621]: time="2024-12-13T04:55:06.601784296Z" level=error msg="ContainerStatus for \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\": not found" Dec 13 04:55:06.602056 kubelet[2082]: E1213 04:55:06.601987 2082 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\": not found" containerID="d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b" Dec 13 04:55:06.602056 kubelet[2082]: I1213 04:55:06.602043 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b"} err="failed to get container status \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d13a3e6110d6d05c22719e145a76dcd9819041d5d3eec7ced0988245e31f1b0b\": not found" Dec 13 04:55:06.670426 systemd[1]: var-lib-kubelet-pods-1678db66\x2d9777\x2d422c\x2db18b\x2d75a7c2f9053f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2vx5q.mount: Deactivated successfully. Dec 13 04:55:06.670637 systemd[1]: var-lib-kubelet-pods-1678db66\x2d9777\x2d422c\x2db18b\x2d75a7c2f9053f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:55:06.670810 systemd[1]: var-lib-kubelet-pods-1678db66\x2d9777\x2d422c\x2db18b\x2d75a7c2f9053f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 04:55:07.198550 kubelet[2082]: E1213 04:55:07.198458 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:07.286071 kubelet[2082]: I1213 04:55:07.285681 2082 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" path="/var/lib/kubelet/pods/1678db66-9777-422c-b18b-75a7c2f9053f/volumes" Dec 13 04:55:08.199304 kubelet[2082]: E1213 04:55:08.199208 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:09.200346 kubelet[2082]: E1213 04:55:09.200258 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:10.190868 kubelet[2082]: I1213 04:55:10.190812 2082 topology_manager.go:215] "Topology Admit Handler" podUID="e8e1a625-3f2f-449a-916c-87e06082b754" podNamespace="kube-system" podName="cilium-operator-5cc964979-hjph5" Dec 13 04:55:10.192031 kubelet[2082]: E1213 04:55:10.191214 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" containerName="mount-bpf-fs" Dec 13 04:55:10.192031 kubelet[2082]: E1213 04:55:10.191247 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" containerName="clean-cilium-state" Dec 13 04:55:10.192031 kubelet[2082]: E1213 04:55:10.191262 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" containerName="cilium-agent" Dec 13 04:55:10.192031 kubelet[2082]: E1213 04:55:10.191274 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" containerName="mount-cgroup" Dec 13 04:55:10.192031 kubelet[2082]: E1213 04:55:10.191285 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" containerName="apply-sysctl-overwrites" Dec 13 04:55:10.192031 kubelet[2082]: I1213 04:55:10.191332 2082 memory_manager.go:354] "RemoveStaleState removing state" podUID="1678db66-9777-422c-b18b-75a7c2f9053f" containerName="cilium-agent" Dec 13 04:55:10.200471 kubelet[2082]: E1213 04:55:10.200416 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:55:10.218812 kubelet[2082]: I1213 04:55:10.218743 2082 topology_manager.go:215] "Topology Admit Handler" podUID="4674357f-1214-4ac2-a0d8-45bd84172db5" podNamespace="kube-system" podName="cilium-hnwtp" Dec 13 04:55:10.271963 kubelet[2082]: I1213 04:55:10.271900 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8e1a625-3f2f-449a-916c-87e06082b754-cilium-config-path\") pod \"cilium-operator-5cc964979-hjph5\" (UID: \"e8e1a625-3f2f-449a-916c-87e06082b754\") " pod="kube-system/cilium-operator-5cc964979-hjph5" Dec 13 04:55:10.271963 kubelet[2082]: I1213 04:55:10.271974 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg5gb\" (UniqueName: \"kubernetes.io/projected/e8e1a625-3f2f-449a-916c-87e06082b754-kube-api-access-tg5gb\") pod \"cilium-operator-5cc964979-hjph5\" (UID: \"e8e1a625-3f2f-449a-916c-87e06082b754\") " pod="kube-system/cilium-operator-5cc964979-hjph5" Dec 13 04:55:10.372954 kubelet[2082]: I1213 04:55:10.372815 2082 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-cni-path\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.373632 kubelet[2082]: I1213 04:55:10.373046 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-host-proc-sys-net\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.373632 kubelet[2082]: I1213 04:55:10.373126 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-xtables-lock\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.373632 kubelet[2082]: I1213 04:55:10.373263 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-hostproc\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.373632 kubelet[2082]: I1213 04:55:10.373305 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-cilium-cgroup\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.373632 kubelet[2082]: I1213 04:55:10.373337 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4674357f-1214-4ac2-a0d8-45bd84172db5-cilium-ipsec-secrets\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.373632 kubelet[2082]: I1213 04:55:10.373366 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4674357f-1214-4ac2-a0d8-45bd84172db5-clustermesh-secrets\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374035 kubelet[2082]: I1213 04:55:10.373453 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-host-proc-sys-kernel\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374035 kubelet[2082]: I1213 04:55:10.373602 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-etc-cni-netd\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374035 kubelet[2082]: I1213 04:55:10.373663 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4674357f-1214-4ac2-a0d8-45bd84172db5-cilium-config-path\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374035 kubelet[2082]: I1213 04:55:10.373700 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-cilium-run\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374035 kubelet[2082]: I1213 04:55:10.373732 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-bpf-maps\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374035 kubelet[2082]: I1213 04:55:10.373774 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4674357f-1214-4ac2-a0d8-45bd84172db5-lib-modules\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374371 kubelet[2082]: I1213 04:55:10.373813 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4674357f-1214-4ac2-a0d8-45bd84172db5-hubble-tls\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.374371 kubelet[2082]: I1213 04:55:10.373850 2082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpzk\" (UniqueName: \"kubernetes.io/projected/4674357f-1214-4ac2-a0d8-45bd84172db5-kube-api-access-wmpzk\") pod \"cilium-hnwtp\" (UID: \"4674357f-1214-4ac2-a0d8-45bd84172db5\") " pod="kube-system/cilium-hnwtp"
Dec 13 04:55:10.503875 containerd[1621]: time="2024-12-13T04:55:10.503350555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-hjph5,Uid:e8e1a625-3f2f-449a-916c-87e06082b754,Namespace:kube-system,Attempt:0,}"
Dec 13 04:55:10.537865 containerd[1621]: time="2024-12-13T04:55:10.537800168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hnwtp,Uid:4674357f-1214-4ac2-a0d8-45bd84172db5,Namespace:kube-system,Attempt:0,}"
Dec 13 04:55:10.563580 containerd[1621]: time="2024-12-13T04:55:10.562375703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:55:10.563580 containerd[1621]: time="2024-12-13T04:55:10.562463801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:55:10.563580 containerd[1621]: time="2024-12-13T04:55:10.562508777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:55:10.563892 containerd[1621]: time="2024-12-13T04:55:10.562989340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:55:10.569723 containerd[1621]: time="2024-12-13T04:55:10.569630085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:55:10.570207 containerd[1621]: time="2024-12-13T04:55:10.569914569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:55:10.570207 containerd[1621]: time="2024-12-13T04:55:10.569955607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:55:10.570207 containerd[1621]: time="2024-12-13T04:55:10.570098909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:55:10.626727 containerd[1621]: time="2024-12-13T04:55:10.626650043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hnwtp,Uid:4674357f-1214-4ac2-a0d8-45bd84172db5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\""
Dec 13 04:55:10.649888 containerd[1621]: time="2024-12-13T04:55:10.649833352Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 04:55:10.668203 containerd[1621]: time="2024-12-13T04:55:10.668140201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-hjph5,Uid:e8e1a625-3f2f-449a-916c-87e06082b754,Namespace:kube-system,Attempt:0,} returns sandbox id \"df020bcd2b5bea137267fb52fdc07cd42aa793228b8ad7793856cefd263415b9\""
Dec 13 04:55:10.670578 containerd[1621]: time="2024-12-13T04:55:10.670276262Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 04:55:10.694641 containerd[1621]: time="2024-12-13T04:55:10.694553171Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b091500e3a0b4ccd712d902d913a809c0899a3186614de28fbde43a5155fe27\""
Dec 13 04:55:10.695774 containerd[1621]: time="2024-12-13T04:55:10.695695404Z" level=info msg="StartContainer for \"6b091500e3a0b4ccd712d902d913a809c0899a3186614de28fbde43a5155fe27\""
Dec 13 04:55:10.772454 containerd[1621]: time="2024-12-13T04:55:10.771489681Z" level=info msg="StartContainer for \"6b091500e3a0b4ccd712d902d913a809c0899a3186614de28fbde43a5155fe27\" returns successfully"
Dec 13 04:55:10.825366 containerd[1621]: time="2024-12-13T04:55:10.825274118Z" level=info msg="shim disconnected" id=6b091500e3a0b4ccd712d902d913a809c0899a3186614de28fbde43a5155fe27 namespace=k8s.io
Dec 13 04:55:10.825366 containerd[1621]: time="2024-12-13T04:55:10.825360914Z" level=warning msg="cleaning up after shim disconnected" id=6b091500e3a0b4ccd712d902d913a809c0899a3186614de28fbde43a5155fe27 namespace=k8s.io
Dec 13 04:55:10.825366 containerd[1621]: time="2024-12-13T04:55:10.825377492Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:11.130425 kubelet[2082]: E1213 04:55:11.129958 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:11.201538 kubelet[2082]: E1213 04:55:11.201459 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:11.273660 kubelet[2082]: E1213 04:55:11.273620 2082 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 04:55:11.566624 containerd[1621]: time="2024-12-13T04:55:11.566555021Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 04:55:11.580762 containerd[1621]: time="2024-12-13T04:55:11.580709919Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de7e272749c4b80266de6b41d531f8ecc7219dfa1ff85e96b3e449f68b2ef1d7\""
Dec 13 04:55:11.582093 containerd[1621]: time="2024-12-13T04:55:11.581795849Z" level=info msg="StartContainer for \"de7e272749c4b80266de6b41d531f8ecc7219dfa1ff85e96b3e449f68b2ef1d7\""
Dec 13 04:55:11.654729 containerd[1621]: time="2024-12-13T04:55:11.654668070Z" level=info msg="StartContainer for \"de7e272749c4b80266de6b41d531f8ecc7219dfa1ff85e96b3e449f68b2ef1d7\" returns successfully"
Dec 13 04:55:11.694156 containerd[1621]: time="2024-12-13T04:55:11.693833264Z" level=info msg="shim disconnected" id=de7e272749c4b80266de6b41d531f8ecc7219dfa1ff85e96b3e449f68b2ef1d7 namespace=k8s.io
Dec 13 04:55:11.694156 containerd[1621]: time="2024-12-13T04:55:11.693914098Z" level=warning msg="cleaning up after shim disconnected" id=de7e272749c4b80266de6b41d531f8ecc7219dfa1ff85e96b3e449f68b2ef1d7 namespace=k8s.io
Dec 13 04:55:11.694156 containerd[1621]: time="2024-12-13T04:55:11.693940996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:12.202380 kubelet[2082]: E1213 04:55:12.202288 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:12.404726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de7e272749c4b80266de6b41d531f8ecc7219dfa1ff85e96b3e449f68b2ef1d7-rootfs.mount: Deactivated successfully.
Dec 13 04:55:12.574685 containerd[1621]: time="2024-12-13T04:55:12.574510616Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 04:55:12.593758 containerd[1621]: time="2024-12-13T04:55:12.593695304Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6\""
Dec 13 04:55:12.595545 containerd[1621]: time="2024-12-13T04:55:12.595045446Z" level=info msg="StartContainer for \"e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6\""
Dec 13 04:55:12.676361 containerd[1621]: time="2024-12-13T04:55:12.676303659Z" level=info msg="StartContainer for \"e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6\" returns successfully"
Dec 13 04:55:12.712925 containerd[1621]: time="2024-12-13T04:55:12.712846055Z" level=info msg="shim disconnected" id=e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6 namespace=k8s.io
Dec 13 04:55:12.712925 containerd[1621]: time="2024-12-13T04:55:12.712921857Z" level=warning msg="cleaning up after shim disconnected" id=e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6 namespace=k8s.io
Dec 13 04:55:12.712925 containerd[1621]: time="2024-12-13T04:55:12.712937808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:12.721400 kubelet[2082]: I1213 04:55:12.721360 2082 setters.go:568] "Node became not ready" node="10.244.15.10" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T04:55:12Z","lastTransitionTime":"2024-12-13T04:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 04:55:13.203214 kubelet[2082]: E1213 04:55:13.203134 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:13.407488 systemd[1]: run-containerd-runc-k8s.io-e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6-runc.Q4sLvd.mount: Deactivated successfully.
Dec 13 04:55:13.408236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2751c01227d27911773e08f710de148d6ac594577eed01656380afa1cd7e4c6-rootfs.mount: Deactivated successfully.
Dec 13 04:55:13.580711 containerd[1621]: time="2024-12-13T04:55:13.580289541Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 04:55:13.606721 containerd[1621]: time="2024-12-13T04:55:13.606188535Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13910127d82713f6ee1fbfe7c7e4ad191241361656b8b59df7d5ef6a2ef94eff\""
Dec 13 04:55:13.607897 containerd[1621]: time="2024-12-13T04:55:13.607061855Z" level=info msg="StartContainer for \"13910127d82713f6ee1fbfe7c7e4ad191241361656b8b59df7d5ef6a2ef94eff\""
Dec 13 04:55:13.728748 containerd[1621]: time="2024-12-13T04:55:13.728688156Z" level=info msg="StartContainer for \"13910127d82713f6ee1fbfe7c7e4ad191241361656b8b59df7d5ef6a2ef94eff\" returns successfully"
Dec 13 04:55:13.892036 containerd[1621]: time="2024-12-13T04:55:13.891784101Z" level=info msg="shim disconnected" id=13910127d82713f6ee1fbfe7c7e4ad191241361656b8b59df7d5ef6a2ef94eff namespace=k8s.io
Dec 13 04:55:13.892561 containerd[1621]: time="2024-12-13T04:55:13.892318559Z" level=warning msg="cleaning up after shim disconnected" id=13910127d82713f6ee1fbfe7c7e4ad191241361656b8b59df7d5ef6a2ef94eff namespace=k8s.io
Dec 13 04:55:13.892561 containerd[1621]: time="2024-12-13T04:55:13.892348557Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:13.932088 containerd[1621]: time="2024-12-13T04:55:13.931975760Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:55:13.933938 containerd[1621]: time="2024-12-13T04:55:13.933877382Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217"
Dec 13 04:55:13.935174 containerd[1621]: time="2024-12-13T04:55:13.935138055Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:55:13.938071 containerd[1621]: time="2024-12-13T04:55:13.938016971Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.267693581s"
Dec 13 04:55:13.938177 containerd[1621]: time="2024-12-13T04:55:13.938116487Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 04:55:13.941298 containerd[1621]: time="2024-12-13T04:55:13.941258180Z" level=info msg="CreateContainer within sandbox \"df020bcd2b5bea137267fb52fdc07cd42aa793228b8ad7793856cefd263415b9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 04:55:13.952927 containerd[1621]: time="2024-12-13T04:55:13.952839987Z" level=info msg="CreateContainer within sandbox \"df020bcd2b5bea137267fb52fdc07cd42aa793228b8ad7793856cefd263415b9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"08d7d03ce75a0ef81b2b06872c83ba9d2d6dde95402d5708951cadb688b679f4\""
Dec 13 04:55:13.953806 containerd[1621]: time="2024-12-13T04:55:13.953772134Z" level=info msg="StartContainer for \"08d7d03ce75a0ef81b2b06872c83ba9d2d6dde95402d5708951cadb688b679f4\""
Dec 13 04:55:14.021704 containerd[1621]: time="2024-12-13T04:55:14.021647052Z" level=info msg="StartContainer for \"08d7d03ce75a0ef81b2b06872c83ba9d2d6dde95402d5708951cadb688b679f4\" returns successfully"
Dec 13 04:55:14.203358 kubelet[2082]: E1213 04:55:14.203275 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:14.408114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13910127d82713f6ee1fbfe7c7e4ad191241361656b8b59df7d5ef6a2ef94eff-rootfs.mount: Deactivated successfully.
Dec 13 04:55:14.590384 containerd[1621]: time="2024-12-13T04:55:14.588309531Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:55:14.607830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143509542.mount: Deactivated successfully.
Dec 13 04:55:14.610979 containerd[1621]: time="2024-12-13T04:55:14.610704799Z" level=info msg="CreateContainer within sandbox \"ae3feb1c1d534d39bf20f7026a393f96d151dea76005c3c4f6918a3c2dfc9850\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0735cbb8a8aeb114ca4dbd7dcf972e709e446e6501aab6e650f524015018ce5\""
Dec 13 04:55:14.612099 containerd[1621]: time="2024-12-13T04:55:14.611838912Z" level=info msg="StartContainer for \"a0735cbb8a8aeb114ca4dbd7dcf972e709e446e6501aab6e650f524015018ce5\""
Dec 13 04:55:14.701453 containerd[1621]: time="2024-12-13T04:55:14.701386588Z" level=info msg="StartContainer for \"a0735cbb8a8aeb114ca4dbd7dcf972e709e446e6501aab6e650f524015018ce5\" returns successfully"
Dec 13 04:55:15.204290 kubelet[2082]: E1213 04:55:15.204232 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:15.371239 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 04:55:15.407862 systemd[1]: run-containerd-runc-k8s.io-a0735cbb8a8aeb114ca4dbd7dcf972e709e446e6501aab6e650f524015018ce5-runc.1RIC1f.mount: Deactivated successfully.
Dec 13 04:55:15.634259 kubelet[2082]: I1213 04:55:15.632506 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-hjph5" podStartSLOduration=2.363612848 podStartE2EDuration="5.632441014s" podCreationTimestamp="2024-12-13 04:55:10 +0000 UTC" firstStartedPulling="2024-12-13 04:55:10.669679605 +0000 UTC m=+80.258305346" lastFinishedPulling="2024-12-13 04:55:13.938507766 +0000 UTC m=+83.527133512" observedRunningTime="2024-12-13 04:55:14.645890185 +0000 UTC m=+84.234515925" watchObservedRunningTime="2024-12-13 04:55:15.632441014 +0000 UTC m=+85.221066761"
Dec 13 04:55:16.209946 kubelet[2082]: E1213 04:55:16.205187 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:17.210496 kubelet[2082]: E1213 04:55:17.210394 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:18.210749 kubelet[2082]: E1213 04:55:18.210699 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:19.034974 systemd-networkd[1253]: lxc_health: Link UP
Dec 13 04:55:19.045709 systemd-networkd[1253]: lxc_health: Gained carrier
Dec 13 04:55:19.211028 kubelet[2082]: E1213 04:55:19.210935 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:20.212111 kubelet[2082]: E1213 04:55:20.212033 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:20.584144 kubelet[2082]: I1213 04:55:20.583408 2082 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hnwtp" podStartSLOduration=10.583344841 podStartE2EDuration="10.583344841s" podCreationTimestamp="2024-12-13 04:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:55:15.634196172 +0000 UTC m=+85.222821925" watchObservedRunningTime="2024-12-13 04:55:20.583344841 +0000 UTC m=+90.171970589"
Dec 13 04:55:20.631191 systemd-networkd[1253]: lxc_health: Gained IPv6LL
Dec 13 04:55:21.212552 kubelet[2082]: E1213 04:55:21.212480 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:22.213600 kubelet[2082]: E1213 04:55:22.213533 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:23.215048 kubelet[2082]: E1213 04:55:23.214469 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:24.215108 kubelet[2082]: E1213 04:55:24.215000 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:25.215615 kubelet[2082]: E1213 04:55:25.215463 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:26.216674 kubelet[2082]: E1213 04:55:26.216595 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:27.217738 kubelet[2082]: E1213 04:55:27.217607 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:28.218335 kubelet[2082]: E1213 04:55:28.218255 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:55:29.219515 kubelet[2082]: E1213 04:55:29.219426 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"