Dec 13 05:55:55.024086 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 05:55:55.024133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:55:55.024270 kernel: BIOS-provided physical RAM map:
Dec 13 05:55:55.024288 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 05:55:55.024297 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 05:55:55.024325 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 05:55:55.024335 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 05:55:55.024345 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 05:55:55.024354 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 05:55:55.024363 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 05:55:55.024372 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 05:55:55.024381 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 05:55:55.024394 kernel: NX (Execute Disable) protection: active
Dec 13 05:55:55.024403 kernel: APIC: Static calls initialized
Dec 13 05:55:55.024414 kernel: SMBIOS 2.8 present.
Dec 13 05:55:55.024424 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Dec 13 05:55:55.024434 kernel: Hypervisor detected: KVM
Dec 13 05:55:55.024448 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 05:55:55.024458 kernel: kvm-clock: using sched offset of 4266576333 cycles
Dec 13 05:55:55.024469 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 05:55:55.024479 kernel: tsc: Detected 2799.998 MHz processor
Dec 13 05:55:55.024489 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 05:55:55.024511 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 05:55:55.024522 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 05:55:55.024532 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 05:55:55.024543 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 05:55:55.024581 kernel: Using GB pages for direct mapping
Dec 13 05:55:55.024592 kernel: ACPI: Early table checksum verification disabled
Dec 13 05:55:55.024603 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Dec 13 05:55:55.024614 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024625 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024636 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024647 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 05:55:55.024657 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024668 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024684 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024695 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:55:55.024705 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 05:55:55.024716 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 05:55:55.024727 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 05:55:55.024744 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 05:55:55.024756 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 05:55:55.024771 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 05:55:55.024783 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 05:55:55.024794 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 05:55:55.024805 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 05:55:55.024817 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 05:55:55.024828 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 05:55:55.024839 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 05:55:55.024872 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 05:55:55.024883 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 05:55:55.024894 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 05:55:55.024904 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 05:55:55.024930 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 05:55:55.024941 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 05:55:55.024951 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 05:55:55.024962 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 05:55:55.024972 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 05:55:55.024983 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 05:55:55.025000 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 05:55:55.025011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 05:55:55.025022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 05:55:55.025032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 05:55:55.025055 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 05:55:55.025066 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 05:55:55.025077 kernel: Zone ranges:
Dec 13 05:55:55.025088 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 05:55:55.025099 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 05:55:55.025126 kernel: Normal empty
Dec 13 05:55:55.025137 kernel: Movable zone start for each node
Dec 13 05:55:55.025147 kernel: Early memory node ranges
Dec 13 05:55:55.025157 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 05:55:55.025196 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 05:55:55.025208 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 05:55:55.025218 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 05:55:55.025229 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 05:55:55.025240 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 05:55:55.025251 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 05:55:55.025267 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 05:55:55.025278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 05:55:55.025289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 05:55:55.025300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 05:55:55.025310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 05:55:55.025321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 05:55:55.025331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 05:55:55.025349 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 05:55:55.025367 kernel: TSC deadline timer available
Dec 13 05:55:55.025398 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 05:55:55.025416 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 05:55:55.025427 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 05:55:55.025438 kernel: Booting paravirtualized kernel on KVM
Dec 13 05:55:55.025448 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 05:55:55.025459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 05:55:55.025470 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 05:55:55.025481 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 05:55:55.025504 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 05:55:55.025520 kernel: kvm-guest: PV spinlocks enabled
Dec 13 05:55:55.025531 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 05:55:55.025544 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:55:55.025576 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 05:55:55.025588 kernel: random: crng init done
Dec 13 05:55:55.025599 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 05:55:55.025611 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 05:55:55.025622 kernel: Fallback order for Node 0: 0
Dec 13 05:55:55.025639 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 05:55:55.025650 kernel: Policy zone: DMA32
Dec 13 05:55:55.025662 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 05:55:55.025673 kernel: software IO TLB: area num 16.
Dec 13 05:55:55.025685 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved)
Dec 13 05:55:55.025696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 05:55:55.025708 kernel: Kernel/User page tables isolation: enabled
Dec 13 05:55:55.025719 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 05:55:55.025730 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 05:55:55.025746 kernel: Dynamic Preempt: voluntary
Dec 13 05:55:55.025758 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 05:55:55.025770 kernel: rcu: RCU event tracing is enabled.
Dec 13 05:55:55.025781 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 05:55:55.025793 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 05:55:55.025817 kernel: Rude variant of Tasks RCU enabled.
Dec 13 05:55:55.025834 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 05:55:55.025855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 05:55:55.025867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 05:55:55.025879 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 05:55:55.025891 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 05:55:55.025914 kernel: Console: colour VGA+ 80x25
Dec 13 05:55:55.025930 kernel: printk: console [tty0] enabled
Dec 13 05:55:55.025941 kernel: printk: console [ttyS0] enabled
Dec 13 05:55:55.025951 kernel: ACPI: Core revision 20230628
Dec 13 05:55:55.025962 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 05:55:55.025973 kernel: x2apic enabled
Dec 13 05:55:55.025988 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 05:55:55.025999 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Dec 13 05:55:55.026010 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 13 05:55:55.026021 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 05:55:55.026032 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 05:55:55.026043 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 05:55:55.026053 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 05:55:55.026064 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 05:55:55.026075 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 05:55:55.026103 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 05:55:55.026113 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 05:55:55.026123 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 05:55:55.026160 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 05:55:55.026172 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 05:55:55.026183 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 05:55:55.026194 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 05:55:55.026204 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 05:55:55.026228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 05:55:55.026239 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 05:55:55.026250 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 05:55:55.026267 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 05:55:55.026291 kernel: Freeing SMP alternatives memory: 32K
Dec 13 05:55:55.026303 kernel: pid_max: default: 32768 minimum: 301
Dec 13 05:55:55.026314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 05:55:55.026326 kernel: landlock: Up and running.
Dec 13 05:55:55.026337 kernel: SELinux: Initializing.
Dec 13 05:55:55.026349 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 05:55:55.026366 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 05:55:55.026378 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 05:55:55.026389 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:55:55.026402 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:55:55.026418 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:55:55.026430 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 05:55:55.026442 kernel: signal: max sigframe size: 1776
Dec 13 05:55:55.026453 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 05:55:55.026465 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 05:55:55.026490 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 05:55:55.026502 kernel: smp: Bringing up secondary CPUs ...
Dec 13 05:55:55.026513 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 05:55:55.026525 kernel: .... node #0, CPUs: #1
Dec 13 05:55:55.026554 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 05:55:55.026575 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 05:55:55.026587 kernel: smpboot: Max logical packages: 16
Dec 13 05:55:55.026612 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Dec 13 05:55:55.026624 kernel: devtmpfs: initialized
Dec 13 05:55:55.026635 kernel: x86/mm: Memory block size: 128MB
Dec 13 05:55:55.026647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 05:55:55.026659 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 05:55:55.026671 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 05:55:55.026688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 05:55:55.026701 kernel: audit: initializing netlink subsys (disabled)
Dec 13 05:55:55.026713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 05:55:55.026725 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 05:55:55.026736 kernel: audit: type=2000 audit(1734069353.636:1): state=initialized audit_enabled=0 res=1
Dec 13 05:55:55.026748 kernel: cpuidle: using governor menu
Dec 13 05:55:55.026760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 05:55:55.026772 kernel: dca service started, version 1.12.1
Dec 13 05:55:55.026784 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 05:55:55.026801 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 05:55:55.026813 kernel: PCI: Using configuration type 1 for base access
Dec 13 05:55:55.026825 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 05:55:55.026840 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 05:55:55.026852 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 05:55:55.026864 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 05:55:55.026875 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 05:55:55.026887 kernel: ACPI: Added _OSI(Module Device)
Dec 13 05:55:55.026899 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 05:55:55.026916 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 05:55:55.026928 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 05:55:55.026940 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 05:55:55.026951 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 05:55:55.026963 kernel: ACPI: Interpreter enabled
Dec 13 05:55:55.026975 kernel: ACPI: PM: (supports S0 S5)
Dec 13 05:55:55.026987 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 05:55:55.026999 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 05:55:55.027010 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 05:55:55.027034 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 05:55:55.027046 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 05:55:55.027424 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 05:55:55.027645 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 05:55:55.027802 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 05:55:55.027821 kernel: PCI host bridge to bus 0000:00
Dec 13 05:55:55.028015 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 05:55:55.028203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 05:55:55.028371 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 05:55:55.028523 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 05:55:55.028695 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 05:55:55.028835 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 05:55:55.029012 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 05:55:55.029755 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 05:55:55.029957 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 05:55:55.030154 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 05:55:55.030324 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 05:55:55.030480 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 05:55:55.030671 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 05:55:55.030843 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.031020 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 05:55:55.031260 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.031414 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 05:55:55.031591 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.031746 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 05:55:55.031920 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.032116 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 05:55:55.032355 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.032515 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 05:55:55.032724 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.032910 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 05:55:55.033159 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.033365 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 05:55:55.033541 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 05:55:55.033707 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 05:55:55.033871 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 05:55:55.034025 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Dec 13 05:55:55.034238 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 05:55:55.034408 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 05:55:55.034593 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 05:55:55.034757 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 05:55:55.034924 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Dec 13 05:55:55.035082 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 05:55:55.035294 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 05:55:55.035464 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 05:55:55.035656 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 05:55:55.035831 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 05:55:55.035987 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Dec 13 05:55:55.036222 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 05:55:55.036388 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 05:55:55.036540 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 05:55:55.036741 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 05:55:55.036908 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 05:55:55.037062 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 05:55:55.039264 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Dec 13 05:55:55.039436 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 05:55:55.039610 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:55:55.039780 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 05:55:55.039967 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 05:55:55.040178 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 05:55:55.040353 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 05:55:55.040510 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Dec 13 05:55:55.040680 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 05:55:55.040837 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:55:55.041016 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 05:55:55.041199 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 05:55:55.041360 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 05:55:55.041513 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 05:55:55.041679 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:55:55.041851 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 05:55:55.042012 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 05:55:55.044213 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 05:55:55.044389 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 05:55:55.044546 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:55:55.044718 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 05:55:55.044893 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 05:55:55.045046 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:55:55.045235 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 05:55:55.045398 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 05:55:55.045560 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:55:55.045725 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 05:55:55.045877 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 05:55:55.046028 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:55:55.047383 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 05:55:55.047548 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 05:55:55.047726 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:55:55.050274 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 05:55:55.050445 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 05:55:55.050668 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:55:55.050689 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 05:55:55.050702 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 05:55:55.050715 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 05:55:55.050727 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 05:55:55.050739 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 05:55:55.050752 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 05:55:55.050764 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 05:55:55.050776 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 05:55:55.050795 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 05:55:55.050808 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 05:55:55.050820 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 05:55:55.050844 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 05:55:55.050855 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 05:55:55.050867 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 05:55:55.050879 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 05:55:55.050903 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 05:55:55.050915 kernel: iommu: Default domain type: Translated
Dec 13 05:55:55.050931 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 05:55:55.050942 kernel: PCI: Using ACPI for IRQ routing
Dec 13 05:55:55.050954 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 05:55:55.050965 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 05:55:55.050988 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 05:55:55.051202 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 05:55:55.051363 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 05:55:55.051539 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 05:55:55.051576 kernel: vgaarb: loaded
Dec 13 05:55:55.051589 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 05:55:55.051601 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 05:55:55.051614 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 05:55:55.051626 kernel: pnp: PnP ACPI init
Dec 13 05:55:55.051786 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 05:55:55.051807 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 05:55:55.051819 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 05:55:55.051832 kernel: NET: Registered PF_INET protocol family
Dec 13 05:55:55.051852 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 05:55:55.051864 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 05:55:55.051877 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 05:55:55.051902 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 05:55:55.051914 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 05:55:55.051934 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 05:55:55.051946 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 05:55:55.051958 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 05:55:55.051974 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 05:55:55.052002 kernel: NET: Registered PF_XDP protocol family
Dec 13 05:55:55.052193 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 05:55:55.052348 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 05:55:55.052500 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 05:55:55.052671 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 05:55:55.052825 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 05:55:55.053006 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 05:55:55.055181 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 05:55:55.055373 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 05:55:55.055594 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 05:55:55.055758 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 05:55:55.055913 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 05:55:55.056066 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 05:55:55.056241 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 05:55:55.056423 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 05:55:55.056606 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 05:55:55.056765 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Dec 13 05:55:55.056943 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 05:55:55.057131 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:55:55.059174 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 05:55:55.059346 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Dec 13 05:55:55.059505 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 05:55:55.059685 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:55:55.059841 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 05:55:55.060015 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Dec 13 05:55:55.060247 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 05:55:55.060408 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:55:55.060593 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 05:55:55.060757 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Dec 13 05:55:55.060911 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 05:55:55.061068 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:55:55.061267 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 05:55:55.062799 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Dec 13 05:55:55.062959 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 05:55:55.063113 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:55:55.064276 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 05:55:55.064464 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Dec 13 05:55:55.064640 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 05:55:55.064792 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:55:55.064958 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 05:55:55.066979 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Dec 13 05:55:55.067165 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 05:55:55.067336 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:55:55.067495 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 05:55:55.067672 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Dec 13 05:55:55.067825 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 05:55:55.068039 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:55:55.068236 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 05:55:55.068386 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Dec 13 05:55:55.068545 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 05:55:55.068722 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:55:55.068884 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 05:55:55.069026 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 05:55:55.071217 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 05:55:55.071371 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 05:55:55.071540 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 05:55:55.071999 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 05:55:55.072210 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Dec 13 05:55:55.072400 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 05:55:55.072567 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:55:55.072733 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Dec 13 05:55:55.072908 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 05:55:55.073063 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:55:55.075350 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Dec 13 05:55:55.075631 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 05:55:55.075791 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:55:55.076022 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Dec 13 05:55:55.076242 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 05:55:55.076421 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:55:55.076623 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Dec 13 05:55:55.076774 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 05:55:55.076925 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:55:55.078219 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Dec 13 05:55:55.078408 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 05:55:55.078603 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:55:55.078776 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Dec 13 05:55:55.078950 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 05:55:55.079092 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:55:55.079322 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Dec 13 05:55:55.079503 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 05:55:55.079661 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:55:55.079817 kernel: pci_bus 0000:09: resource 0 [io 
0x7000-0x7fff] Dec 13 05:55:55.079963 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 05:55:55.080168 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:55:55.080190 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 05:55:55.080219 kernel: PCI: CLS 0 bytes, default 64 Dec 13 05:55:55.080232 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 05:55:55.080245 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 05:55:55.080258 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 05:55:55.080271 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 05:55:55.080284 kernel: Initialise system trusted keyrings Dec 13 05:55:55.080297 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 05:55:55.080318 kernel: Key type asymmetric registered Dec 13 05:55:55.080330 kernel: Asymmetric key parser 'x509' registered Dec 13 05:55:55.080348 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 05:55:55.080361 kernel: io scheduler mq-deadline registered Dec 13 05:55:55.080383 kernel: io scheduler kyber registered Dec 13 05:55:55.080396 kernel: io scheduler bfq registered Dec 13 05:55:55.080605 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 05:55:55.080763 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 05:55:55.080919 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:55:55.081078 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 05:55:55.081300 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 05:55:55.081466 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ 
Dec 13 05:55:55.081647 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 05:55:55.081808 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 05:55:55.081992 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:55:55.082208 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 05:55:55.082385 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 05:55:55.082546 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:55:55.082714 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 05:55:55.082867 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 05:55:55.083037 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:55:55.083252 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 05:55:55.083441 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 05:55:55.083608 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:55:55.083766 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 05:55:55.083919 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 05:55:55.084074 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:55:55.084287 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 05:55:55.084458 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 05:55:55.084626 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 
05:55:55.084647 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 05:55:55.084662 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 05:55:55.084675 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 05:55:55.084688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 05:55:55.084701 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 05:55:55.084713 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 05:55:55.084733 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 05:55:55.084746 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 05:55:55.084912 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 05:55:55.084931 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 05:55:55.085087 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 05:55:55.085264 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T05:55:54 UTC (1734069354) Dec 13 05:55:55.085428 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 05:55:55.085454 kernel: intel_pstate: CPU model not supported Dec 13 05:55:55.085467 kernel: NET: Registered PF_INET6 protocol family Dec 13 05:55:55.085480 kernel: Segment Routing with IPv6 Dec 13 05:55:55.085493 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 05:55:55.085506 kernel: NET: Registered PF_PACKET protocol family Dec 13 05:55:55.085519 kernel: Key type dns_resolver registered Dec 13 05:55:55.085531 kernel: IPI shorthand broadcast: enabled Dec 13 05:55:55.085544 kernel: sched_clock: Marking stable (1139003685, 231311292)->(1589682566, -219367589) Dec 13 05:55:55.085568 kernel: registered taskstats version 1 Dec 13 05:55:55.085581 kernel: Loading compiled-in X.509 certificates Dec 13 05:55:55.085600 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 05:55:55.085612 kernel: Key type .fscrypt registered 
Dec 13 05:55:55.085625 kernel: Key type fscrypt-provisioning registered Dec 13 05:55:55.085638 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 05:55:55.085650 kernel: ima: Allocated hash algorithm: sha1 Dec 13 05:55:55.085663 kernel: ima: No architecture policies found Dec 13 05:55:55.085676 kernel: clk: Disabling unused clocks Dec 13 05:55:55.085688 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 05:55:55.085706 kernel: Write protecting the kernel read-only data: 36864k Dec 13 05:55:55.085719 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 05:55:55.085731 kernel: Run /init as init process Dec 13 05:55:55.085744 kernel: with arguments: Dec 13 05:55:55.085756 kernel: /init Dec 13 05:55:55.085769 kernel: with environment: Dec 13 05:55:55.085781 kernel: HOME=/ Dec 13 05:55:55.085793 kernel: TERM=linux Dec 13 05:55:55.085806 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 05:55:55.085821 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 05:55:55.085849 systemd[1]: Detected virtualization kvm. Dec 13 05:55:55.085863 systemd[1]: Detected architecture x86-64. Dec 13 05:55:55.085876 systemd[1]: Running in initrd. Dec 13 05:55:55.085889 systemd[1]: No hostname configured, using default hostname. Dec 13 05:55:55.085902 systemd[1]: Hostname set to . Dec 13 05:55:55.085917 systemd[1]: Initializing machine ID from VM UUID. Dec 13 05:55:55.085935 systemd[1]: Queued start job for default target initrd.target. Dec 13 05:55:55.085949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 05:55:55.085963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:55:55.085981 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 05:55:55.085995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 05:55:55.086009 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 05:55:55.086023 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 05:55:55.086039 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 05:55:55.086058 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 05:55:55.086072 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:55:55.086086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:55:55.086100 systemd[1]: Reached target paths.target - Path Units. Dec 13 05:55:55.086139 systemd[1]: Reached target slices.target - Slice Units. Dec 13 05:55:55.086154 systemd[1]: Reached target swap.target - Swaps. Dec 13 05:55:55.086168 systemd[1]: Reached target timers.target - Timer Units. Dec 13 05:55:55.086182 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 05:55:55.086202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 05:55:55.086216 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 05:55:55.086230 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 05:55:55.086244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 05:55:55.086258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 05:55:55.086272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:55:55.086285 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 05:55:55.086299 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 05:55:55.086318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 05:55:55.086332 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 05:55:55.086346 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 05:55:55.086372 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 05:55:55.086385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 05:55:55.086399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:55:55.086453 systemd-journald[201]: Collecting audit messages is disabled. Dec 13 05:55:55.086490 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 05:55:55.086504 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:55:55.086517 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 05:55:55.086558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 05:55:55.086574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 05:55:55.086587 kernel: Bridge firewalling registered Dec 13 05:55:55.086602 systemd-journald[201]: Journal started Dec 13 05:55:55.086627 systemd-journald[201]: Runtime Journal (/run/log/journal/bdae4da6bcf649deb329131203a6192c) is 4.7M, max 38.0M, 33.2M free. 
Dec 13 05:55:55.027529 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 05:55:55.078389 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 05:55:55.144121 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 05:55:55.145534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 05:55:55.153482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:55:55.154590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 05:55:55.164321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:55:55.173377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 05:55:55.176322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 05:55:55.188162 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 05:55:55.191954 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:55:55.201394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:55:55.209700 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:55:55.216348 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 05:55:55.217421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:55:55.230344 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 05:55:55.242148 dracut-cmdline[232]: dracut-dracut-053 Dec 13 05:55:55.247719 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:55:55.276525 systemd-resolved[234]: Positive Trust Anchors: Dec 13 05:55:55.276579 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 05:55:55.276620 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 05:55:55.285811 systemd-resolved[234]: Defaulting to hostname 'linux'. Dec 13 05:55:55.288013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 05:55:55.289735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:55:55.349190 kernel: SCSI subsystem initialized Dec 13 05:55:55.360297 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 05:55:55.374147 kernel: iscsi: registered transport (tcp) Dec 13 05:55:55.399210 kernel: iscsi: registered transport (qla4xxx) Dec 13 05:55:55.399303 kernel: QLogic iSCSI HBA Driver Dec 13 05:55:55.450043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 05:55:55.456307 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 05:55:55.489689 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 05:55:55.489742 kernel: device-mapper: uevent: version 1.0.3 Dec 13 05:55:55.490444 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 05:55:55.537252 kernel: raid6: sse2x4 gen() 13595 MB/s Dec 13 05:55:55.555193 kernel: raid6: sse2x2 gen() 9836 MB/s Dec 13 05:55:55.573774 kernel: raid6: sse2x1 gen() 10521 MB/s Dec 13 05:55:55.573837 kernel: raid6: using algorithm sse2x4 gen() 13595 MB/s Dec 13 05:55:55.592632 kernel: raid6: .... xor() 8244 MB/s, rmw enabled Dec 13 05:55:55.592700 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 05:55:55.618171 kernel: xor: automatically using best checksumming function avx Dec 13 05:55:55.803188 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 05:55:55.817864 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 05:55:55.825377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:55:55.849756 systemd-udevd[418]: Using default interface naming scheme 'v255'. Dec 13 05:55:55.856632 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:55:55.864310 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 05:55:55.886581 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Dec 13 05:55:55.925126 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 05:55:55.932295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 05:55:56.035784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:55:56.042280 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 05:55:56.074825 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 05:55:56.078071 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 05:55:56.079732 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:55:56.081009 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 05:55:56.090310 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 05:55:56.113301 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 05:55:56.164290 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 13 05:55:56.231825 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 05:55:56.231851 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 05:55:56.232035 kernel: AVX version of gcm_enc/dec engaged. Dec 13 05:55:56.232055 kernel: AES CTR mode by8 optimization enabled Dec 13 05:55:56.232072 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 05:55:56.232088 kernel: GPT:17805311 != 125829119 Dec 13 05:55:56.232127 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 05:55:56.232167 kernel: GPT:17805311 != 125829119 Dec 13 05:55:56.232187 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 05:55:56.232204 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:55:56.232220 kernel: ACPI: bus type USB registered Dec 13 05:55:56.232237 kernel: usbcore: registered new interface driver usbfs Dec 13 05:55:56.232254 kernel: usbcore: registered new interface driver hub Dec 13 05:55:56.199691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 05:55:56.199848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:55:56.236129 kernel: usbcore: registered new device driver usb Dec 13 05:55:56.200777 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:55:56.206080 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 05:55:56.206273 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:55:56.206986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:55:56.211382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:55:56.308131 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469) Dec 13 05:55:56.315164 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (470) Dec 13 05:55:56.316399 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 05:55:56.402183 kernel: libata version 3.00 loaded. 
Dec 13 05:55:56.402220 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:55:56.402544 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 05:55:56.402743 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 05:55:56.402974 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:55:56.403192 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 05:55:56.403378 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 05:55:56.403576 kernel: hub 1-0:1.0: USB hub found Dec 13 05:55:56.403809 kernel: hub 1-0:1.0: 4 ports detected Dec 13 05:55:56.404018 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 05:55:56.404319 kernel: hub 2-0:1.0: USB hub found Dec 13 05:55:56.404618 kernel: hub 2-0:1.0: 4 ports detected Dec 13 05:55:56.404819 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 05:55:56.405019 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 05:55:56.405049 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 05:55:56.405252 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 05:55:56.405447 kernel: scsi host0: ahci Dec 13 05:55:56.405679 kernel: scsi host1: ahci Dec 13 05:55:56.405871 kernel: scsi host2: ahci Dec 13 05:55:56.406098 kernel: scsi host3: ahci Dec 13 05:55:56.406304 kernel: scsi host4: ahci Dec 13 05:55:56.406551 kernel: scsi host5: ahci Dec 13 05:55:56.406732 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 05:55:56.406753 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 05:55:56.406778 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 05:55:56.406796 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 05:55:56.406813 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 
41 Dec 13 05:55:56.406830 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 05:55:56.385154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:55:56.409863 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 05:55:56.421706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 05:55:56.427460 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 05:55:56.428350 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 05:55:56.440611 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 05:55:56.444449 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:55:56.449721 disk-uuid[564]: Primary Header is updated. Dec 13 05:55:56.449721 disk-uuid[564]: Secondary Entries is updated. Dec 13 05:55:56.449721 disk-uuid[564]: Secondary Header is updated. Dec 13 05:55:56.456286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:55:56.463444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:55:56.487984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 05:55:56.578595 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 05:55:56.697174 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 05:55:56.697290 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 05:55:56.698590 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 05:55:56.700315 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 05:55:56.702808 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 05:55:56.703456 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 05:55:56.723169 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 05:55:56.730495 kernel: usbcore: registered new interface driver usbhid Dec 13 05:55:56.730539 kernel: usbhid: USB HID core driver Dec 13 05:55:56.740128 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 05:55:56.740174 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 05:55:57.465398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:55:57.465779 disk-uuid[565]: The operation has completed successfully. Dec 13 05:55:57.516194 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 05:55:57.517351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 05:55:57.534336 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 05:55:57.550102 sh[584]: Success Dec 13 05:55:57.566177 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 05:55:57.637424 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 05:55:57.640244 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 05:55:57.641279 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 05:55:57.663635 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 05:55:57.663686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:55:57.665635 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 05:55:57.669046 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 05:55:57.669080 kernel: BTRFS info (device dm-0): using free space tree Dec 13 05:55:57.678332 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 05:55:57.679627 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 05:55:57.687297 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 05:55:57.689651 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 05:55:57.703155 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:55:57.706753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:55:57.706784 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:55:57.712150 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:55:57.725203 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 05:55:57.727636 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:55:57.734360 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 05:55:57.740296 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 05:55:57.838583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:55:57.848432 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 13 05:55:57.879049 ignition[673]: Ignition 2.19.0 Dec 13 05:55:57.879098 ignition[673]: Stage: fetch-offline Dec 13 05:55:57.879227 ignition[673]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:55:57.879250 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:55:57.879439 ignition[673]: parsed url from cmdline: "" Dec 13 05:55:57.883186 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:55:57.879449 ignition[673]: no config URL provided Dec 13 05:55:57.879459 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:55:57.879474 ignition[673]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:55:57.879494 ignition[673]: failed to fetch config: resource requires networking Dec 13 05:55:57.879796 ignition[673]: Ignition finished successfully Dec 13 05:55:57.891744 systemd-networkd[768]: lo: Link UP Dec 13 05:55:57.891760 systemd-networkd[768]: lo: Gained carrier Dec 13 05:55:57.893899 systemd-networkd[768]: Enumeration completed Dec 13 05:55:57.894034 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:55:57.894998 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:55:57.895003 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 05:55:57.900057 systemd[1]: Reached target network.target - Network. Dec 13 05:55:57.901942 systemd-networkd[768]: eth0: Link UP Dec 13 05:55:57.901949 systemd-networkd[768]: eth0: Gained carrier Dec 13 05:55:57.901994 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:55:57.909358 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 05:55:57.923259 systemd-networkd[768]: eth0: DHCPv4 address 10.243.75.98/30, gateway 10.243.75.97 acquired from 10.243.75.97
Dec 13 05:55:57.933842 ignition[776]: Ignition 2.19.0
Dec 13 05:55:57.933858 ignition[776]: Stage: fetch
Dec 13 05:55:57.934088 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:55:57.935033 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:55:57.935208 ignition[776]: parsed url from cmdline: ""
Dec 13 05:55:57.935215 ignition[776]: no config URL provided
Dec 13 05:55:57.935225 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 05:55:57.935240 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Dec 13 05:55:57.935397 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 05:55:57.935459 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 05:55:57.935602 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 05:55:57.950540 ignition[776]: GET result: OK
Dec 13 05:55:57.951174 ignition[776]: parsing config with SHA512: 85a646da9244231a0592bf08a1b3eec2734cfe80933047f6e0c59e7c5b1b1e8a18adad8e09108caaa40dd9e099460c9c64b64151d10274e17dffe9a6e90d6214
Dec 13 05:55:57.954755 unknown[776]: fetched base config from "system"
Dec 13 05:55:57.955081 ignition[776]: fetch: fetch complete
Dec 13 05:55:57.954767 unknown[776]: fetched base config from "system"
Dec 13 05:55:57.955089 ignition[776]: fetch: fetch passed
Dec 13 05:55:57.954776 unknown[776]: fetched user config from "openstack"
Dec 13 05:55:57.955175 ignition[776]: Ignition finished successfully
Dec 13 05:55:57.957576 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 05:55:57.964351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
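The fetch stage above logs a SHA512 digest of the retrieved user_data before parsing it. A minimal Python sketch of that fingerprinting step (the payload below is an arbitrary stand-in, not the real config fetched on this host):

```python
import hashlib

def config_fingerprint(payload: bytes) -> str:
    # Ignition-style logs print the hex SHA-512 of the raw config bytes.
    return hashlib.sha512(payload).hexdigest()

digest = config_fingerprint(b'{"ignition": {"version": "3.4.0"}}')
print(len(digest))  # 128 hex characters, matching the digest length in the log
```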
Dec 13 05:55:57.984213 ignition[784]: Ignition 2.19.0
Dec 13 05:55:57.984230 ignition[784]: Stage: kargs
Dec 13 05:55:57.984511 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:55:57.984530 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:55:57.987961 ignition[784]: kargs: kargs passed
Dec 13 05:55:57.990163 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 05:55:57.988041 ignition[784]: Ignition finished successfully
Dec 13 05:55:57.997339 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 05:55:58.016718 ignition[790]: Ignition 2.19.0
Dec 13 05:55:58.016736 ignition[790]: Stage: disks
Dec 13 05:55:58.016965 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:55:58.019309 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 05:55:58.016983 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:55:58.021597 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 05:55:58.017823 ignition[790]: disks: disks passed
Dec 13 05:55:58.022856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 05:55:58.017886 ignition[790]: Ignition finished successfully
Dec 13 05:55:58.024527 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 05:55:58.026076 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 05:55:58.027301 systemd[1]: Reached target basic.target - Basic System.
Dec 13 05:55:58.036309 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 05:55:58.054601 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 05:55:58.057828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 05:55:58.064208 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 05:55:58.178143 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 05:55:58.179284 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 05:55:58.181359 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 05:55:58.187228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 05:55:58.191255 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 05:55:58.193721 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 05:55:58.196323 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 05:55:58.197278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 05:55:58.213113 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Dec 13 05:55:58.213187 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:55:58.213215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 05:55:58.213239 kernel: BTRFS info (device vda6): using free space tree
Dec 13 05:55:58.197370 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 05:55:58.212420 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 05:55:58.222328 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 05:55:58.227304 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 05:55:58.230346 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 05:55:58.296457 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 05:55:58.304155 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Dec 13 05:55:58.311006 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 05:55:58.319412 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 05:55:58.420838 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 05:55:58.427240 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 05:55:58.439405 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 05:55:58.451167 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:55:58.470495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 05:55:58.481664 ignition[925]: INFO : Ignition 2.19.0
Dec 13 05:55:58.482767 ignition[925]: INFO : Stage: mount
Dec 13 05:55:58.483413 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 05:55:58.483413 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:55:58.486264 ignition[925]: INFO : mount: mount passed
Dec 13 05:55:58.486264 ignition[925]: INFO : Ignition finished successfully
Dec 13 05:55:58.486192 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 05:55:58.662253 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 05:55:59.624991 systemd-networkd[768]: eth0: Gained IPv6LL
Dec 13 05:56:01.133221 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d2d8:24:19ff:fef3:4b62/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d2d8:24:19ff:fef3:4b62/64 assigned by NDisc.
Dec 13 05:56:01.133234 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
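The networkd hint above refers to the `IPv6Token=` option of a `.network` unit. A sketch of an override that would pin the NDisc-generated interface identifier to the same suffix the DHCPv6 address uses (the file name and `[Match]` section are illustrative; only the token suffix is taken from the log):

```ini
# Hypothetical /etc/systemd/network/10-eth0.network override
[Match]
Name=eth0

[Network]
DHCP=yes
# Pin the lower 64 bits used for SLAAC instead of the default EUI-64 token,
# so the RA-derived address no longer conflicts with the DHCPv6 one.
IPv6Token=static:::24:19ff:fef3:4b62
```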
Dec 13 05:56:05.363289 coreos-metadata[808]: Dec 13 05:56:05.363 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 05:56:05.386480 coreos-metadata[808]: Dec 13 05:56:05.386 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 05:56:05.399606 coreos-metadata[808]: Dec 13 05:56:05.399 INFO Fetch successful
Dec 13 05:56:05.400495 coreos-metadata[808]: Dec 13 05:56:05.400 INFO wrote hostname srv-e5p2w.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 05:56:05.401953 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 05:56:05.402167 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 05:56:05.414665 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 05:56:05.422711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 05:56:05.438142 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940)
Dec 13 05:56:05.441931 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:56:05.441971 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 05:56:05.443856 kernel: BTRFS info (device vda6): using free space tree
Dec 13 05:56:05.449159 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 05:56:05.451581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 05:56:05.484215 ignition[958]: INFO : Ignition 2.19.0
Dec 13 05:56:05.484215 ignition[958]: INFO : Stage: files
Dec 13 05:56:05.487653 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 05:56:05.487653 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:56:05.487653 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 05:56:05.490399 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 05:56:05.490399 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 05:56:05.492357 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 05:56:05.493313 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 05:56:05.493313 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 05:56:05.492943 unknown[958]: wrote ssh authorized keys file for user: core
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 05:56:05.496265 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 05:56:06.072737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 05:56:07.032531 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 05:56:07.036058 ignition[958]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 05:56:07.036058 ignition[958]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 05:56:07.036058 ignition[958]: INFO : files: files passed
Dec 13 05:56:07.036058 ignition[958]: INFO : Ignition finished successfully
Dec 13 05:56:07.037341 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 05:56:07.048541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 05:56:07.050307 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 05:56:07.054355 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 05:56:07.054490 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
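The op(5)/op(6) pair records Ignition linking /etc/extensions/kubernetes.raw to a versioned sysext image and then writing that image under /opt. The resulting on-disk layout can be sketched in Python (rooted in a temporary directory with a placeholder file, since the real sysroot and download are not reproduced here):

```python
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for /sysroot
image = os.path.join(root, "opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw")
link = os.path.join(root, "etc/extensions/kubernetes.raw")

os.makedirs(os.path.dirname(image))
os.makedirs(os.path.dirname(link))
with open(image, "wb") as f:
    f.write(b"placeholder")  # the real file is the downloaded sysext image

# systemd-sysext resolves /etc/extensions/*.raw; the symlink target is
# absolute relative to the final root, exactly as the log records it.
os.symlink("/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw", link)
print(os.readlink(link))
```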
Dec 13 05:56:07.074691 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 05:56:07.074691 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 05:56:07.078065 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 05:56:07.080645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 05:56:07.081662 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 05:56:07.098745 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 05:56:07.133613 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 05:56:07.133775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 05:56:07.135801 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 05:56:07.136986 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 05:56:07.138588 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 05:56:07.143318 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 05:56:07.162429 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 05:56:07.170338 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 05:56:07.184174 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 05:56:07.185895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 05:56:07.186747 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 05:56:07.187518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 05:56:07.187675 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 05:56:07.189542 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 05:56:07.190409 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 05:56:07.191677 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 05:56:07.193211 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 05:56:07.194672 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 05:56:07.195999 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 05:56:07.197510 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 05:56:07.199130 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 05:56:07.200587 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 05:56:07.201993 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 05:56:07.203533 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 05:56:07.203741 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 05:56:07.205488 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 05:56:07.206378 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 05:56:07.207634 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 05:56:07.207792 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 05:56:07.209060 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 05:56:07.209253 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 05:56:07.211086 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 05:56:07.211283 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 05:56:07.213033 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 05:56:07.213197 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 05:56:07.220347 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 05:56:07.220992 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 05:56:07.221186 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 05:56:07.225323 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 05:56:07.231721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 05:56:07.231929 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 05:56:07.235688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 05:56:07.235849 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 05:56:07.247460 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 05:56:07.249155 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 05:56:07.254946 ignition[1011]: INFO : Ignition 2.19.0
Dec 13 05:56:07.257223 ignition[1011]: INFO : Stage: umount
Dec 13 05:56:07.257223 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 05:56:07.257223 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:56:07.257223 ignition[1011]: INFO : umount: umount passed
Dec 13 05:56:07.257223 ignition[1011]: INFO : Ignition finished successfully
Dec 13 05:56:07.255714 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 05:56:07.259092 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 05:56:07.259313 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 05:56:07.260789 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 05:56:07.260883 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 05:56:07.261828 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 05:56:07.261902 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 05:56:07.263145 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 05:56:07.263219 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 05:56:07.264401 systemd[1]: Stopped target network.target - Network.
Dec 13 05:56:07.266716 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 05:56:07.266784 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 05:56:07.267525 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 05:56:07.270176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 05:56:07.279290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 05:56:07.280313 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 05:56:07.280892 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 05:56:07.281598 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 05:56:07.281662 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 05:56:07.283164 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 05:56:07.283230 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 05:56:07.284468 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 05:56:07.284536 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 05:56:07.285876 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 05:56:07.285938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 05:56:07.287753 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 05:56:07.290950 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 05:56:07.296308 systemd-networkd[768]: eth0: DHCPv6 lease lost
Dec 13 05:56:07.297933 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 05:56:07.298088 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 05:56:07.301845 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 05:56:07.301914 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 05:56:07.309298 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 05:56:07.309983 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 05:56:07.310054 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 05:56:07.313354 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 05:56:07.314446 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 05:56:07.314605 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 05:56:07.323775 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 05:56:07.324891 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 05:56:07.326355 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 05:56:07.326504 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 05:56:07.329696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 05:56:07.329782 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 05:56:07.331351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 05:56:07.331405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 05:56:07.332794 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 05:56:07.332861 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 05:56:07.334870 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 05:56:07.334937 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 05:56:07.336247 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 05:56:07.336335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:56:07.347337 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 05:56:07.349493 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 05:56:07.349573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 05:56:07.350302 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 05:56:07.350364 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 05:56:07.351058 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 05:56:07.353304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 05:56:07.354348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 05:56:07.354412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 05:56:07.355167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 05:56:07.355226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:56:07.357302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 05:56:07.357450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 05:56:07.368651 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 05:56:07.368821 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 05:56:07.370397 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 05:56:07.371405 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 05:56:07.371472 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 05:56:07.381368 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 05:56:07.390097 systemd[1]: Switching root.
Dec 13 05:56:07.425714 systemd-journald[201]: Journal stopped
Dec 13 05:56:08.730690 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 13 05:56:08.730781 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 05:56:08.730810 kernel: SELinux: policy capability open_perms=1
Dec 13 05:56:08.730830 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 05:56:08.730854 kernel: SELinux: policy capability always_check_network=0
Dec 13 05:56:08.730877 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 05:56:08.730909 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 05:56:08.730935 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 05:56:08.730959 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 05:56:08.730977 kernel: audit: type=1403 audit(1734069367.643:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 05:56:08.731003 systemd[1]: Successfully loaded SELinux policy in 50.035ms.
Dec 13 05:56:08.731030 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.630ms.
Dec 13 05:56:08.731053 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 05:56:08.731073 systemd[1]: Detected virtualization kvm.
Dec 13 05:56:08.731092 systemd[1]: Detected architecture x86-64.
Dec 13 05:56:08.735580 systemd[1]: Detected first boot.
Dec 13 05:56:08.735608 systemd[1]: Hostname set to .
Dec 13 05:56:08.735649 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 05:56:08.735671 zram_generator::config[1057]: No configuration found.
Dec 13 05:56:08.735699 systemd[1]: Populated /etc with preset unit settings.
Dec 13 05:56:08.735730 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 05:56:08.735758 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 05:56:08.735777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 05:56:08.735798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 05:56:08.735817 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 05:56:08.735836 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 05:56:08.735855 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 05:56:08.735875 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 05:56:08.735906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 05:56:08.735926 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 05:56:08.735946 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 05:56:08.735971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 05:56:08.735992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 05:56:08.736012 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 05:56:08.736036 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 05:56:08.736056 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 05:56:08.736082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 05:56:08.742217 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 05:56:08.742268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 05:56:08.742297 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 05:56:08.742318 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 05:56:08.742343 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 05:56:08.742363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 05:56:08.742396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 05:56:08.742417 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 05:56:08.742436 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 05:56:08.742462 systemd[1]: Reached target swap.target - Swaps.
Dec 13 05:56:08.742482 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 05:56:08.742500 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 05:56:08.742520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 05:56:08.742539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 05:56:08.742557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 05:56:08.742575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 05:56:08.742606 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 05:56:08.742626 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 05:56:08.742645 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 05:56:08.742663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:08.742682 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 05:56:08.742702 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 05:56:08.742721 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 05:56:08.742741 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 05:56:08.742772 systemd[1]: Reached target machines.target - Containers.
Dec 13 05:56:08.742798 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 05:56:08.742819 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:56:08.742856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 05:56:08.742894 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 05:56:08.742926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 05:56:08.742947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 05:56:08.742966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 05:56:08.742985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 05:56:08.743003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 05:56:08.743022 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 05:56:08.743041 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 05:56:08.743062 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 05:56:08.743081 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 05:56:08.743131 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 05:56:08.743153 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 05:56:08.743172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 05:56:08.743191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 05:56:08.743210 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 05:56:08.743240 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 05:56:08.743261 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 05:56:08.743287 systemd[1]: Stopped verity-setup.service.
Dec 13 05:56:08.743308 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:08.743342 kernel: loop: module loaded
Dec 13 05:56:08.743368 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 05:56:08.743389 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 05:56:08.743409 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 05:56:08.743428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 05:56:08.743460 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 05:56:08.743481 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 05:56:08.743512 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 05:56:08.743529 kernel: fuse: init (API version 7.39)
Dec 13 05:56:08.743547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 05:56:08.743599 systemd-journald[1146]: Collecting audit messages is disabled.
Dec 13 05:56:08.743641 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 05:56:08.743686 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 05:56:08.743708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 05:56:08.743727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 05:56:08.743747 systemd-journald[1146]: Journal started
Dec 13 05:56:08.743795 systemd-journald[1146]: Runtime Journal (/run/log/journal/bdae4da6bcf649deb329131203a6192c) is 4.7M, max 38.0M, 33.2M free.
Dec 13 05:56:08.358923 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 05:56:08.375441 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 05:56:08.746131 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 05:56:08.376028 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 05:56:08.748650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 05:56:08.748869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 05:56:08.750426 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 05:56:08.750640 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 05:56:08.751794 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 05:56:08.751981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 05:56:08.753201 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 05:56:08.754313 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 05:56:08.755389 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 05:56:08.769812 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 05:56:08.780186 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 05:56:08.786168 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 05:56:08.789197 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 05:56:08.789251 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 05:56:08.791174 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 05:56:08.800479 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 05:56:08.806341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 05:56:08.807213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:56:08.814963 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 05:56:08.819315 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 05:56:08.820103 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 05:56:08.833361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 05:56:08.835763 kernel: ACPI: bus type drm_connector registered
Dec 13 05:56:08.837369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 05:56:08.840433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 05:56:08.843988 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 05:56:08.848618 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 05:56:08.851848 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 05:56:08.852123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 05:56:08.853074 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 05:56:08.855172 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 05:56:08.856426 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 05:56:08.858886 systemd-journald[1146]: Time spent on flushing to /var/log/journal/bdae4da6bcf649deb329131203a6192c is 85.584ms for 1127 entries.
Dec 13 05:56:08.858886 systemd-journald[1146]: System Journal (/var/log/journal/bdae4da6bcf649deb329131203a6192c) is 8.0M, max 584.8M, 576.8M free.
Dec 13 05:56:08.985414 systemd-journald[1146]: Received client request to flush runtime journal.
Dec 13 05:56:08.985485 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 05:56:08.873698 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 05:56:08.874863 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 05:56:08.883301 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 05:56:08.972043 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 05:56:08.974281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 05:56:08.975588 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 05:56:08.993388 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 05:56:09.004815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 05:56:09.028287 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 05:56:09.036144 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 05:56:09.042546 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 05:56:09.082361 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 05:56:09.094331 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 05:56:09.113204 kernel: loop2: detected capacity change from 0 to 140768
Dec 13 05:56:09.108192 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Dec 13 05:56:09.108247 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Dec 13 05:56:09.122584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 05:56:09.139311 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 05:56:09.159183 kernel: loop3: detected capacity change from 0 to 8
Dec 13 05:56:09.185131 kernel: loop4: detected capacity change from 0 to 142488
Dec 13 05:56:09.211176 kernel: loop5: detected capacity change from 0 to 205544
Dec 13 05:56:09.234195 kernel: loop6: detected capacity change from 0 to 140768
Dec 13 05:56:09.265787 kernel: loop7: detected capacity change from 0 to 8
Dec 13 05:56:09.267295 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 05:56:09.268098 (sd-merge)[1211]: Merged extensions into '/usr'.
Dec 13 05:56:09.277932 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 05:56:09.278269 systemd[1]: Reloading...
Dec 13 05:56:09.424133 zram_generator::config[1237]: No configuration found.
Dec 13 05:56:09.591046 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 05:56:09.684064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 05:56:09.748641 systemd[1]: Reloading finished in 469 ms.
Dec 13 05:56:09.776539 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 05:56:09.778079 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 05:56:09.792361 systemd[1]: Starting ensure-sysext.service...
Dec 13 05:56:09.798291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 05:56:09.812361 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)...
Dec 13 05:56:09.812387 systemd[1]: Reloading...
Dec 13 05:56:09.836018 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 05:56:09.837356 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 05:56:09.838839 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 05:56:09.840524 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Dec 13 05:56:09.840795 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Dec 13 05:56:09.846313 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 05:56:09.846479 systemd-tmpfiles[1294]: Skipping /boot
Dec 13 05:56:09.868752 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 05:56:09.868934 systemd-tmpfiles[1294]: Skipping /boot
Dec 13 05:56:09.918203 zram_generator::config[1324]: No configuration found.
Dec 13 05:56:10.089990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 05:56:10.154739 systemd[1]: Reloading finished in 341 ms.
Dec 13 05:56:10.182409 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 05:56:10.187777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 05:56:10.205323 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 05:56:10.221384 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 05:56:10.226215 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 05:56:10.235363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 05:56:10.240946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 05:56:10.245257 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 05:56:10.253176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:10.253499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:56:10.260471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 05:56:10.268472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 05:56:10.279481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 05:56:10.280372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:56:10.286313 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 05:56:10.287029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:10.290411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:10.290665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:56:10.290891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:56:10.291032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:10.295907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:10.298253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:56:10.303430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 05:56:10.306300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:56:10.306484 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:56:10.312369 systemd[1]: Finished ensure-sysext.service.
Dec 13 05:56:10.318708 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 05:56:10.329305 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 05:56:10.339369 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 05:56:10.342468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 05:56:10.342698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 05:56:10.343797 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 05:56:10.346165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 05:56:10.353446 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 05:56:10.355653 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 05:56:10.355853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 05:56:10.364900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 05:56:10.365816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 05:56:10.380259 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 05:56:10.381436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 05:56:10.381524 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 05:56:10.381560 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 05:56:10.395510 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Dec 13 05:56:10.395575 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 05:56:10.417401 augenrules[1416]: No rules
Dec 13 05:56:10.419413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 05:56:10.420992 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 05:56:10.467235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 05:56:10.477331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 05:56:10.560703 systemd-resolved[1383]: Positive Trust Anchors:
Dec 13 05:56:10.560752 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 05:56:10.560800 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 05:56:10.573102 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 05:56:10.574252 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 05:56:10.576416 systemd-resolved[1383]: Using system hostname 'srv-e5p2w.gb1.brightbox.com'.
Dec 13 05:56:10.584572 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 05:56:10.585792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 05:56:10.630265 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 05:56:10.633583 systemd-networkd[1430]: lo: Link UP
Dec 13 05:56:10.633594 systemd-networkd[1430]: lo: Gained carrier
Dec 13 05:56:10.635462 systemd-networkd[1430]: Enumeration completed
Dec 13 05:56:10.635706 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 05:56:10.636590 systemd[1]: Reached target network.target - Network.
Dec 13 05:56:10.644315 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 05:56:10.646545 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1431)
Dec 13 05:56:10.654191 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1431)
Dec 13 05:56:10.693148 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1429)
Dec 13 05:56:10.732227 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 05:56:10.757198 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 05:56:10.763134 kernel: ACPI: button: Power Button [PWRF]
Dec 13 05:56:10.766722 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 05:56:10.766734 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 05:56:10.769379 systemd-networkd[1430]: eth0: Link UP
Dec 13 05:56:10.769393 systemd-networkd[1430]: eth0: Gained carrier
Dec 13 05:56:10.769410 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 05:56:10.808278 systemd-networkd[1430]: eth0: DHCPv4 address 10.243.75.98/30, gateway 10.243.75.97 acquired from 10.243.75.97
Dec 13 05:56:10.809437 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
Dec 13 05:56:10.836698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 05:56:10.850043 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 05:56:10.851357 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 05:56:10.877139 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 05:56:10.885600 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 05:56:10.888633 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 05:56:10.882663 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 05:56:10.928489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:56:11.090974 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 05:56:11.110003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:56:11.117427 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 05:56:11.134206 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 05:56:11.165644 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 05:56:11.167355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 05:56:11.168116 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 05:56:11.168984 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 05:56:11.169925 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 05:56:11.170998 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 05:56:11.171838 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 05:56:11.172612 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 05:56:11.173365 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 05:56:11.173414 systemd[1]: Reached target paths.target - Path Units.
Dec 13 05:56:11.174017 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 05:56:11.176307 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 05:56:11.178783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 05:56:11.183733 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 05:56:11.186081 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 05:56:11.187464 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 05:56:11.188320 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 05:56:11.188939 systemd[1]: Reached target basic.target - Basic System.
Dec 13 05:56:11.189672 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 05:56:11.189717 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 05:56:11.191267 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 05:56:11.196322 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 05:56:11.206317 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 05:56:11.206587 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 05:56:11.211520 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 05:56:11.219350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 05:56:11.220872 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 05:56:11.229396 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 05:56:11.234328 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 05:56:11.240481 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 05:56:11.251367 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 05:56:11.253901 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 05:56:11.255344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 05:56:11.264343 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 05:56:11.267283 jq[1475]: false
Dec 13 05:56:11.269266 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 05:56:11.271946 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 05:56:11.276075 dbus-daemon[1474]: [system] SELinux support is enabled
Dec 13 05:56:11.277814 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 05:56:11.279714 dbus-daemon[1474]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1430 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 05:56:11.288745 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 05:56:11.290201 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 05:56:11.290680 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 05:56:11.290921 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 05:56:11.299067 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 05:56:11.299229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 05:56:11.301278 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 05:56:11.303206 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 05:56:11.301311 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 05:56:11.307404 extend-filesystems[1476]: Found loop4
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found loop5
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found loop6
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found loop7
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda1
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda2
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda3
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found usr
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda4
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda6
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda7
Dec 13 05:56:11.311254 extend-filesystems[1476]: Found vda9
Dec 13 05:56:11.311254 extend-filesystems[1476]: Checking size of /dev/vda9
Dec 13 05:56:11.324297 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 05:56:11.331526 update_engine[1483]: I20241213 05:56:11.324193 1483 main.cc:92] Flatcar Update Engine starting
Dec 13 05:56:11.331526 update_engine[1483]: I20241213 05:56:11.327402 1483 update_check_scheduler.cc:74] Next update check in 5m52s
Dec 13 05:56:11.327371 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 05:56:11.337324 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 05:56:11.343089 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 05:56:11.344217 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 05:56:11.352218 jq[1485]: true
Dec 13 05:56:11.353793 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 05:56:11.376559 extend-filesystems[1476]: Resized partition /dev/vda9
Dec 13 05:56:11.386171 jq[1507]: true
Dec 13 05:56:11.394560 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024)
Dec 13 05:56:11.418139 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 05:56:11.452658 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 05:56:11.509555 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 05:56:11.510055 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 05:56:11.511390 systemd-logind[1482]: New seat seat0.
Dec 13 05:56:11.512895 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 05:56:11.516141 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1442)
Dec 13 05:56:11.648378 bash[1528]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 05:56:11.649750 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 05:56:11.654896 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 05:56:11.661353 dbus-daemon[1474]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1494 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 05:56:11.662669 systemd[1]: Starting sshkeys.service...
Dec 13 05:56:11.666071 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 05:56:11.680300 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 05:56:11.722994 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 05:56:11.732442 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 05:56:11.740808 polkitd[1537]: Started polkitd version 121
Dec 13 05:56:11.755674 polkitd[1537]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 05:56:11.755787 polkitd[1537]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 05:56:11.760145 polkitd[1537]: Finished loading, compiling and executing 2 rules
Dec 13 05:56:11.763537 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 05:56:11.763754 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 05:56:11.763974 polkitd[1537]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 05:56:11.773702 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 05:56:11.776252 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 05:56:11.796997 systemd-hostnamed[1494]: Hostname set to (static)
Dec 13 05:56:11.813878 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 05:56:11.813878 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 05:56:11.813878 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 05:56:11.824638 extend-filesystems[1476]: Resized filesystem in /dev/vda9
Dec 13 05:56:11.820566 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 05:56:11.820830 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 05:56:11.882901 containerd[1502]: time="2024-12-13T05:56:11.882718105Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 05:56:11.934892 containerd[1502]: time="2024-12-13T05:56:11.934754278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942397343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942435427Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942456645Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942683963Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942715281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942809388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:56:11.942917 containerd[1502]: time="2024-12-13T05:56:11.942830083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.943959 containerd[1502]: time="2024-12-13T05:56:11.943344608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:56:11.943959 containerd[1502]: time="2024-12-13T05:56:11.943373764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.943959 containerd[1502]: time="2024-12-13T05:56:11.943393718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:56:11.943959 containerd[1502]: time="2024-12-13T05:56:11.943408595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.943959 containerd[1502]: time="2024-12-13T05:56:11.943536074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.943959 containerd[1502]: time="2024-12-13T05:56:11.943912528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:56:11.944389 containerd[1502]: time="2024-12-13T05:56:11.944360417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:56:11.944513 containerd[1502]: time="2024-12-13T05:56:11.944489578Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 05:56:11.944715 containerd[1502]: time="2024-12-13T05:56:11.944691687Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 05:56:11.944869 containerd[1502]: time="2024-12-13T05:56:11.944846257Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 05:56:11.948546 containerd[1502]: time="2024-12-13T05:56:11.948515808Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 05:56:11.949189 containerd[1502]: time="2024-12-13T05:56:11.948717446Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 05:56:11.949189 containerd[1502]: time="2024-12-13T05:56:11.948809320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 05:56:11.949189 containerd[1502]: time="2024-12-13T05:56:11.948842533Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 05:56:11.949189 containerd[1502]: time="2024-12-13T05:56:11.948882289Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 05:56:11.949189 containerd[1502]: time="2024-12-13T05:56:11.949084029Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 05:56:11.949745 containerd[1502]: time="2024-12-13T05:56:11.949718182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 05:56:11.949996 containerd[1502]: time="2024-12-13T05:56:11.949971054Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 05:56:11.950089 containerd[1502]: time="2024-12-13T05:56:11.950067495Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 05:56:11.950546 containerd[1502]: time="2024-12-13T05:56:11.950187695Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 05:56:11.951572 containerd[1502]: time="2024-12-13T05:56:11.951517291Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.954452 containerd[1502]: time="2024-12-13T05:56:11.954321904Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.954589 containerd[1502]: time="2024-12-13T05:56:11.954564124Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954704641Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954751402Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954773042Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954794911Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954810852Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954856802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954882147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954900733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954922788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954940897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954958072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954975098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.954996760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956131 containerd[1502]: time="2024-12-13T05:56:11.955015581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955035619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955064383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955089934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955124771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955189656Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955241472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955263409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955281217Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955352170Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955382922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955400406Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955425686Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 05:56:11.956584 containerd[1502]: time="2024-12-13T05:56:11.955440341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.957013 containerd[1502]: time="2024-12-13T05:56:11.955457534Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 05:56:11.957013 containerd[1502]: time="2024-12-13T05:56:11.955478750Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 05:56:11.957013 containerd[1502]: time="2024-12-13T05:56:11.955496939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 05:56:11.957137 containerd[1502]: time="2024-12-13T05:56:11.955950726Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 05:56:11.957137 containerd[1502]: time="2024-12-13T05:56:11.956044806Z" level=info msg="Connect containerd service"
Dec 13 05:56:11.957497 containerd[1502]: time="2024-12-13T05:56:11.957460290Z" level=info msg="using legacy CRI server"
Dec 13 05:56:11.957581 containerd[1502]: time="2024-12-13T05:56:11.957559896Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 05:56:11.957813 containerd[1502]: time="2024-12-13T05:56:11.957788838Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 05:56:11.958826 containerd[1502]: time="2024-12-13T05:56:11.958729353Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 05:56:11.959168 containerd[1502]: time="2024-12-13T05:56:11.959084831Z" level=info msg="Start subscribing containerd event"
Dec 13 05:56:11.959233 containerd[1502]: time="2024-12-13T05:56:11.959187418Z" level=info msg="Start recovering state"
Dec 13 05:56:11.959320 containerd[1502]: time="2024-12-13T05:56:11.959293336Z" level=info msg="Start event monitor"
Dec 13 05:56:11.959364 containerd[1502]: time="2024-12-13T05:56:11.959334969Z" level=info msg="Start snapshots syncer"
Dec 13 05:56:11.959364 containerd[1502]: time="2024-12-13T05:56:11.959356656Z" level=info msg="Start cni network conf syncer for default"
Dec 13 05:56:11.959437 containerd[1502]: time="2024-12-13T05:56:11.959369965Z" level=info msg="Start streaming server"
Dec 13 05:56:11.959960 containerd[1502]: time="2024-12-13T05:56:11.959914502Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 05:56:11.960141 containerd[1502]: time="2024-12-13T05:56:11.960091646Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 05:56:11.960375 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 05:56:11.962061 containerd[1502]: time="2024-12-13T05:56:11.962028318Z" level=info msg="containerd successfully booted in 0.081681s"
Dec 13 05:56:12.040379 systemd-networkd[1430]: eth0: Gained IPv6LL
Dec 13 05:56:12.042283 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
Dec 13 05:56:12.046401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 05:56:12.048365 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 05:56:12.058504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:56:12.063117 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 05:56:12.118649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 05:56:12.276163 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 05:56:12.305885 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 05:56:12.316175 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 05:56:12.319327 systemd[1]: Started sshd@0-10.243.75.98:22-147.75.109.163:56896.service - OpenSSH per-connection server daemon (147.75.109.163:56896).
Dec 13 05:56:12.327262 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 05:56:12.328084 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 05:56:12.335531 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 05:56:12.367158 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 05:56:12.380726 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 05:56:12.385322 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 05:56:12.387807 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 05:56:12.713469 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
Dec 13 05:56:12.714759 systemd-networkd[1430]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d2d8:24:19ff:fef3:4b62/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d2d8:24:19ff:fef3:4b62/64 assigned by NDisc.
Dec 13 05:56:12.714768 systemd-networkd[1430]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 05:56:12.931930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:56:12.937686 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:56:13.259317 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 56896 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:13.261998 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:13.280358 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 05:56:13.288642 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 05:56:13.298708 systemd-logind[1482]: New session 1 of user core.
Dec 13 05:56:13.317594 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 05:56:13.328698 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 05:56:13.345365 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 05:56:13.489176 systemd[1603]: Queued start job for default target default.target.
Dec 13 05:56:13.496450 systemd[1603]: Created slice app.slice - User Application Slice.
Dec 13 05:56:13.496624 systemd[1603]: Reached target paths.target - Paths.
Dec 13 05:56:13.496664 systemd[1603]: Reached target timers.target - Timers.
Dec 13 05:56:13.500264 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 05:56:13.515148 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 05:56:13.515333 systemd[1603]: Reached target sockets.target - Sockets.
Dec 13 05:56:13.515359 systemd[1603]: Reached target basic.target - Basic System.
Dec 13 05:56:13.515444 systemd[1603]: Reached target default.target - Main User Target.
Dec 13 05:56:13.515515 systemd[1603]: Startup finished in 158ms.
Dec 13 05:56:13.516581 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 05:56:13.525638 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 05:56:13.546451 kubelet[1596]: E1213 05:56:13.546376 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:56:13.549226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:56:13.549477 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:56:14.163534 systemd[1]: Started sshd@1-10.243.75.98:22-147.75.109.163:56902.service - OpenSSH per-connection server daemon (147.75.109.163:56902).
Dec 13 05:56:14.537837 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
Dec 13 05:56:15.047175 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 56902 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:15.049273 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:15.057709 systemd-logind[1482]: New session 2 of user core.
Dec 13 05:56:15.070418 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 05:56:15.669450 sshd[1617]: pam_unix(sshd:session): session closed for user core
Dec 13 05:56:15.674072 systemd[1]: sshd@1-10.243.75.98:22-147.75.109.163:56902.service: Deactivated successfully.
Dec 13 05:56:15.676243 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 05:56:15.677183 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit.
Dec 13 05:56:15.678668 systemd-logind[1482]: Removed session 2.
Dec 13 05:56:15.825525 systemd[1]: Started sshd@2-10.243.75.98:22-147.75.109.163:56232.service - OpenSSH per-connection server daemon (147.75.109.163:56232).
Dec 13 05:56:16.725877 sshd[1626]: Accepted publickey for core from 147.75.109.163 port 56232 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:16.728015 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:16.735574 systemd-logind[1482]: New session 3 of user core.
Dec 13 05:56:16.751378 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 05:56:17.343942 sshd[1626]: pam_unix(sshd:session): session closed for user core
Dec 13 05:56:17.348046 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit.
Dec 13 05:56:17.349260 systemd[1]: sshd@2-10.243.75.98:22-147.75.109.163:56232.service: Deactivated successfully.
Dec 13 05:56:17.352009 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 05:56:17.353443 systemd-logind[1482]: Removed session 3.
Dec 13 05:56:17.437859 login[1587]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 05:56:17.439716 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 05:56:17.446909 systemd-logind[1482]: New session 5 of user core.
Dec 13 05:56:17.450529 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 05:56:17.454939 systemd-logind[1482]: New session 4 of user core.
Dec 13 05:56:17.464382 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 05:56:18.329600 coreos-metadata[1473]: Dec 13 05:56:18.329 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 05:56:18.354158 coreos-metadata[1473]: Dec 13 05:56:18.354 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 05:56:18.361073 coreos-metadata[1473]: Dec 13 05:56:18.361 INFO Fetch failed with 404: resource not found
Dec 13 05:56:18.361073 coreos-metadata[1473]: Dec 13 05:56:18.361 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 05:56:18.361680 coreos-metadata[1473]: Dec 13 05:56:18.361 INFO Fetch successful
Dec 13 05:56:18.361680 coreos-metadata[1473]: Dec 13 05:56:18.361 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 05:56:18.376690 coreos-metadata[1473]: Dec 13 05:56:18.376 INFO Fetch successful
Dec 13 05:56:18.376690 coreos-metadata[1473]: Dec 13 05:56:18.376 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 05:56:18.392160 coreos-metadata[1473]: Dec 13 05:56:18.392 INFO Fetch successful
Dec 13 05:56:18.392160 coreos-metadata[1473]: Dec 13 05:56:18.392 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 05:56:18.407492 coreos-metadata[1473]: Dec 13 05:56:18.407 INFO Fetch successful
Dec 13 05:56:18.407715 coreos-metadata[1473]: Dec 13 05:56:18.407 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 05:56:18.427228 coreos-metadata[1473]: Dec 13 05:56:18.427 INFO Fetch successful
Dec 13 05:56:18.457022 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 05:56:18.458542 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 05:56:18.805516 coreos-metadata[1540]: Dec 13 05:56:18.805 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 05:56:18.826926 coreos-metadata[1540]: Dec 13 05:56:18.826 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 05:56:18.852666 coreos-metadata[1540]: Dec 13 05:56:18.852 INFO Fetch successful
Dec 13 05:56:18.852859 coreos-metadata[1540]: Dec 13 05:56:18.852 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 05:56:18.885820 coreos-metadata[1540]: Dec 13 05:56:18.885 INFO Fetch successful
Dec 13 05:56:18.888186 unknown[1540]: wrote ssh authorized keys file for user: core
Dec 13 05:56:18.912233 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 05:56:18.913810 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 05:56:18.915446 systemd[1]: Finished sshkeys.service.
Dec 13 05:56:18.918026 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 05:56:18.918430 systemd[1]: Startup finished in 1.310s (kernel) + 12.899s (initrd) + 11.322s (userspace) = 25.532s.
Dec 13 05:56:23.708386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 05:56:23.719438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:56:23.875245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:56:23.887573 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:56:23.941611 kubelet[1679]: E1213 05:56:23.941540 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:56:23.945890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:56:23.946138 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:56:27.510514 systemd[1]: Started sshd@3-10.243.75.98:22-147.75.109.163:52846.service - OpenSSH per-connection server daemon (147.75.109.163:52846).
Dec 13 05:56:28.387252 sshd[1687]: Accepted publickey for core from 147.75.109.163 port 52846 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:28.389031 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:28.395041 systemd-logind[1482]: New session 6 of user core.
Dec 13 05:56:28.406324 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 05:56:29.005018 sshd[1687]: pam_unix(sshd:session): session closed for user core
Dec 13 05:56:29.009644 systemd[1]: sshd@3-10.243.75.98:22-147.75.109.163:52846.service: Deactivated successfully.
Dec 13 05:56:29.011674 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 05:56:29.012550 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit.
Dec 13 05:56:29.014014 systemd-logind[1482]: Removed session 6.
Dec 13 05:56:29.167568 systemd[1]: Started sshd@4-10.243.75.98:22-147.75.109.163:52848.service - OpenSSH per-connection server daemon (147.75.109.163:52848).
Dec 13 05:56:30.047977 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 52848 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:30.049785 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:30.055676 systemd-logind[1482]: New session 7 of user core.
Dec 13 05:56:30.066380 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 05:56:30.661889 sshd[1694]: pam_unix(sshd:session): session closed for user core
Dec 13 05:56:30.665495 systemd[1]: sshd@4-10.243.75.98:22-147.75.109.163:52848.service: Deactivated successfully.
Dec 13 05:56:30.667469 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 05:56:30.669577 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit.
Dec 13 05:56:30.670996 systemd-logind[1482]: Removed session 7.
Dec 13 05:56:30.823536 systemd[1]: Started sshd@5-10.243.75.98:22-147.75.109.163:52850.service - OpenSSH per-connection server daemon (147.75.109.163:52850).
Dec 13 05:56:31.704655 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 52850 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:31.706888 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:31.715839 systemd-logind[1482]: New session 8 of user core.
Dec 13 05:56:31.721329 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 05:56:32.323977 sshd[1701]: pam_unix(sshd:session): session closed for user core
Dec 13 05:56:32.328097 systemd[1]: sshd@5-10.243.75.98:22-147.75.109.163:52850.service: Deactivated successfully.
Dec 13 05:56:32.330475 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 05:56:32.332637 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit.
Dec 13 05:56:32.334135 systemd-logind[1482]: Removed session 8.
Dec 13 05:56:32.481848 systemd[1]: Started sshd@6-10.243.75.98:22-147.75.109.163:52854.service - OpenSSH per-connection server daemon (147.75.109.163:52854).
Dec 13 05:56:33.374780 sshd[1708]: Accepted publickey for core from 147.75.109.163 port 52854 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:56:33.376756 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:56:33.384458 systemd-logind[1482]: New session 9 of user core.
Dec 13 05:56:33.391337 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 05:56:33.862709 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 05:56:33.863183 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 05:56:33.887234 sudo[1711]: pam_unix(sudo:session): session closed for user root
Dec 13 05:56:33.958355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 05:56:33.965369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:56:34.031186 sshd[1708]: pam_unix(sshd:session): session closed for user core
Dec 13 05:56:34.037009 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit.
Dec 13 05:56:34.038674 systemd[1]: sshd@6-10.243.75.98:22-147.75.109.163:52854.service: Deactivated successfully.
Dec 13 05:56:34.041825 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 05:56:34.045007 systemd-logind[1482]: Removed session 9.
Dec 13 05:56:34.131163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:56:34.143600 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:56:34.186764 systemd[1]: Started sshd@7-10.243.75.98:22-147.75.109.163:52864.service - OpenSSH per-connection server daemon (147.75.109.163:52864).
Dec 13 05:56:34.227264 kubelet[1723]: E1213 05:56:34.227154 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:56:34.228841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:56:34.229055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:56:35.081521 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 52864 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:35.083518 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:35.090173 systemd-logind[1482]: New session 10 of user core. Dec 13 05:56:35.098398 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 05:56:35.559428 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 05:56:35.559876 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:56:35.564796 sudo[1734]: pam_unix(sudo:session): session closed for user root Dec 13 05:56:35.572409 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 05:56:35.572851 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:56:35.594607 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 05:56:35.597481 auditctl[1737]: No rules Dec 13 05:56:35.597959 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 05:56:35.598241 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
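The kubelet exit above (status=1/FAILURE) comes from a plain failed `open()` of `/var/lib/kubelet/config.yaml`, which kubeadm normally writes during `kubeadm init`/`join`; until that file exists, systemd's restart counter keeps climbing. A trivial sketch of the effective pre-flight condition (the helper name is illustrative, not kubelet code):

```python
from pathlib import Path

def kubelet_config_ready(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    """Mirror the failing open() in the log: kubelet exits with status 1
    when this file is absent, and systemd schedules another restart."""
    return Path(path).is_file()
```

On this node the check stays false until the `install.sh` run later in the log provisions the cluster.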
Dec 13 05:56:35.606812 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 05:56:35.638839 augenrules[1755]: No rules Dec 13 05:56:35.639684 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 05:56:35.641255 sudo[1733]: pam_unix(sudo:session): session closed for user root Dec 13 05:56:35.785458 sshd[1729]: pam_unix(sshd:session): session closed for user core Dec 13 05:56:35.789927 systemd[1]: sshd@7-10.243.75.98:22-147.75.109.163:52864.service: Deactivated successfully. Dec 13 05:56:35.791835 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 05:56:35.792643 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Dec 13 05:56:35.793901 systemd-logind[1482]: Removed session 10. Dec 13 05:56:35.939300 systemd[1]: Started sshd@8-10.243.75.98:22-147.75.109.163:59570.service - OpenSSH per-connection server daemon (147.75.109.163:59570). Dec 13 05:56:36.841392 sshd[1763]: Accepted publickey for core from 147.75.109.163 port 59570 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:56:36.843328 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:56:36.850096 systemd-logind[1482]: New session 11 of user core. Dec 13 05:56:36.858302 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 05:56:37.321505 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 05:56:37.321959 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:56:37.990335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:56:38.003534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:56:38.038821 systemd[1]: Reloading requested from client PID 1798 ('systemctl') (unit session-11.scope)... Dec 13 05:56:38.039041 systemd[1]: Reloading... 
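The `augenrules[1755]: No rules` message above follows from the earlier `rm -rf` of the files under `/etc/audit/rules.d/`: augenrules rebuilds the active ruleset by concatenating `*.rules` files in lexical order, and an empty directory yields an empty ruleset. A sketch of that collection step, under the assumption that comment and blank lines are ignored:

```python
from pathlib import Path

def collect_audit_rules(rules_dir: str = "/etc/audit/rules.d"):
    """Concatenate *.rules files in lexical order, roughly as augenrules
    does; an empty result corresponds to the 'No rules' message in the log."""
    lines = []
    for f in sorted(Path(rules_dir).glob("*.rules")):
        lines.extend(f.read_text().splitlines())
    return [l for l in lines if l.strip() and not l.lstrip().startswith("#")]
```

Deleting `80-selinux.rules` and `99-default.rules` and restarting the service therefore loads no rules at all, which is what both `auditctl` and `augenrules` report.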
Dec 13 05:56:38.176154 zram_generator::config[1837]: No configuration found. Dec 13 05:56:38.343180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:56:38.444613 systemd[1]: Reloading finished in 404 ms. Dec 13 05:56:38.517481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:56:38.522749 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:56:38.524880 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:56:38.525203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:56:38.531529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:56:38.662237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:56:38.670551 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:56:38.774710 kubelet[1906]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:56:38.774710 kubelet[1906]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:56:38.774710 kubelet[1906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 05:56:38.776177 kubelet[1906]: I1213 05:56:38.775975 1906 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:56:39.087430 kubelet[1906]: I1213 05:56:39.087340 1906 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 05:56:39.089161 kubelet[1906]: I1213 05:56:39.087761 1906 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:56:39.089161 kubelet[1906]: I1213 05:56:39.088156 1906 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 05:56:39.113450 kubelet[1906]: I1213 05:56:39.113369 1906 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:56:39.127019 kubelet[1906]: E1213 05:56:39.126870 1906 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 05:56:39.127019 kubelet[1906]: I1213 05:56:39.126936 1906 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 05:56:39.135244 kubelet[1906]: I1213 05:56:39.134963 1906 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 05:56:39.136650 kubelet[1906]: I1213 05:56:39.136544 1906 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 05:56:39.136869 kubelet[1906]: I1213 05:56:39.136790 1906 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:56:39.137250 kubelet[1906]: I1213 05:56:39.136860 1906 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.243.75.98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPoli
cyOptions":null,"CgroupVersion":2} Dec 13 05:56:39.137480 kubelet[1906]: I1213 05:56:39.137268 1906 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:56:39.137480 kubelet[1906]: I1213 05:56:39.137283 1906 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 05:56:39.137480 kubelet[1906]: I1213 05:56:39.137455 1906 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:56:39.140242 kubelet[1906]: I1213 05:56:39.138937 1906 kubelet.go:408] "Attempting to sync node with API server" Dec 13 05:56:39.140242 kubelet[1906]: I1213 05:56:39.138969 1906 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:56:39.140242 kubelet[1906]: I1213 05:56:39.139029 1906 kubelet.go:314] "Adding apiserver pod source" Dec 13 05:56:39.140242 kubelet[1906]: I1213 05:56:39.139066 1906 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:56:39.140242 kubelet[1906]: E1213 05:56:39.139659 1906 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:39.140242 kubelet[1906]: E1213 05:56:39.139732 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:39.144018 kubelet[1906]: I1213 05:56:39.143994 1906 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:56:39.146022 kubelet[1906]: I1213 05:56:39.145999 1906 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:56:39.146921 kubelet[1906]: W1213 05:56:39.146890 1906 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
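The NodeConfig dump above includes the default hard-eviction thresholds: an absolute 100Mi floor on `memory.available` plus percentage floors on the node and image filesystems. A small sketch of how one of those signals evaluates (values copied from the log; the helper itself is illustrative, not kubelet code):

```python
# Hard-eviction thresholds from the NodeConfig dump in the log above.
THRESHOLDS = {
    "memory.available": 100 * 1024 * 1024,  # 100Mi, absolute quantity
    "nodefs.available": 0.10,               # fraction of filesystem capacity
    "imagefs.available": 0.15,
}

def memory_pressure(available_bytes: int) -> bool:
    """True when available memory falls below the 100Mi hard-eviction line,
    the condition under which the eviction manager starts evicting pods."""
    return available_bytes < THRESHOLDS["memory.available"]
```

The eviction manager started a few entries later runs these comparisons each loop; here it fails early only because node stats are not yet available.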
Dec 13 05:56:39.148201 kubelet[1906]: I1213 05:56:39.148181 1906 server.go:1269] "Started kubelet" Dec 13 05:56:39.148864 kubelet[1906]: I1213 05:56:39.148821 1906 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:56:39.150702 kubelet[1906]: I1213 05:56:39.150309 1906 server.go:460] "Adding debug handlers to kubelet server" Dec 13 05:56:39.154143 kubelet[1906]: I1213 05:56:39.153064 1906 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 05:56:39.154143 kubelet[1906]: I1213 05:56:39.153576 1906 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:56:39.154143 kubelet[1906]: W1213 05:56:39.154093 1906 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 05:56:39.154460 kubelet[1906]: E1213 05:56:39.154380 1906 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 05:56:39.154633 kubelet[1906]: I1213 05:56:39.154599 1906 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:56:39.154776 kubelet[1906]: W1213 05:56:39.154751 1906 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.243.75.98" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 05:56:39.154904 kubelet[1906]: E1213 05:56:39.154883 1906 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.243.75.98\" is forbidden: User \"system:anonymous\" cannot list 
resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 05:56:39.156040 kubelet[1906]: I1213 05:56:39.156017 1906 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 05:56:39.161405 kubelet[1906]: I1213 05:56:39.161385 1906 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 05:56:39.161669 kubelet[1906]: I1213 05:56:39.161635 1906 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 05:56:39.161871 kubelet[1906]: I1213 05:56:39.161852 1906 reconciler.go:26] "Reconciler: start to sync state" Dec 13 05:56:39.162625 kubelet[1906]: E1213 05:56:39.162602 1906 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:56:39.165965 kubelet[1906]: E1213 05:56:39.163385 1906 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.243.75.98.1810a6ee581993b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.243.75.98,UID:10.243.75.98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.243.75.98,},FirstTimestamp:2024-12-13 05:56:39.148139441 +0000 UTC m=+0.473002242,LastTimestamp:2024-12-13 05:56:39.148139441 +0000 UTC m=+0.473002242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.243.75.98,}" Dec 13 05:56:39.166493 kubelet[1906]: I1213 05:56:39.166470 1906 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:56:39.166751 kubelet[1906]: I1213 05:56:39.166724 
1906 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 05:56:39.168603 kubelet[1906]: I1213 05:56:39.168580 1906 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:56:39.170279 kubelet[1906]: E1213 05:56:39.170243 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.197470 kubelet[1906]: E1213 05:56:39.197422 1906 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.243.75.98\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 05:56:39.199704 kubelet[1906]: I1213 05:56:39.199685 1906 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:56:39.199802 kubelet[1906]: I1213 05:56:39.199785 1906 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:56:39.199949 kubelet[1906]: I1213 05:56:39.199933 1906 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:56:39.202578 kubelet[1906]: I1213 05:56:39.202552 1906 policy_none.go:49] "None policy: Start" Dec 13 05:56:39.207148 kubelet[1906]: I1213 05:56:39.206150 1906 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:56:39.207148 kubelet[1906]: I1213 05:56:39.206186 1906 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:56:39.227607 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 05:56:39.248309 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 05:56:39.252716 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 05:56:39.264193 kubelet[1906]: I1213 05:56:39.263332 1906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:56:39.264193 kubelet[1906]: I1213 05:56:39.263554 1906 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:56:39.264193 kubelet[1906]: I1213 05:56:39.263801 1906 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 05:56:39.264193 kubelet[1906]: I1213 05:56:39.263822 1906 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 05:56:39.265206 kubelet[1906]: I1213 05:56:39.264799 1906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 05:56:39.265206 kubelet[1906]: I1213 05:56:39.264841 1906 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:56:39.265206 kubelet[1906]: I1213 05:56:39.264883 1906 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 05:56:39.265206 kubelet[1906]: E1213 05:56:39.265010 1906 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 05:56:39.267852 kubelet[1906]: I1213 05:56:39.267827 1906 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:56:39.275243 kubelet[1906]: E1213 05:56:39.275217 1906 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.243.75.98\" not found" Dec 13 05:56:39.365724 kubelet[1906]: I1213 05:56:39.365555 1906 kubelet_node_status.go:72] "Attempting to register node" node="10.243.75.98" Dec 13 05:56:39.371535 kubelet[1906]: I1213 05:56:39.371510 1906 kubelet_node_status.go:75] "Successfully registered node" node="10.243.75.98" Dec 13 05:56:39.371640 kubelet[1906]: E1213 05:56:39.371542 1906 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.243.75.98\": node \"10.243.75.98\" not 
found" Dec 13 05:56:39.381235 kubelet[1906]: E1213 05:56:39.381189 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.481835 kubelet[1906]: E1213 05:56:39.481759 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.582305 kubelet[1906]: E1213 05:56:39.582225 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.682932 kubelet[1906]: E1213 05:56:39.682739 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.783659 kubelet[1906]: E1213 05:56:39.783599 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.884609 kubelet[1906]: E1213 05:56:39.884544 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:39.985518 kubelet[1906]: E1213 05:56:39.985444 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.086589 kubelet[1906]: E1213 05:56:40.086541 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.090848 kubelet[1906]: I1213 05:56:40.090794 1906 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 05:56:40.091128 kubelet[1906]: W1213 05:56:40.091043 1906 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 05:56:40.091128 kubelet[1906]: W1213 05:56:40.091043 1906 reflector.go:484] 
k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 05:56:40.140375 kubelet[1906]: E1213 05:56:40.140317 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:40.186658 kubelet[1906]: E1213 05:56:40.186615 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.218867 sudo[1766]: pam_unix(sudo:session): session closed for user root Dec 13 05:56:40.287401 kubelet[1906]: E1213 05:56:40.287228 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.363534 sshd[1763]: pam_unix(sshd:session): session closed for user core Dec 13 05:56:40.368653 systemd[1]: sshd@8-10.243.75.98:22-147.75.109.163:59570.service: Deactivated successfully. Dec 13 05:56:40.370686 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 05:56:40.371827 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit. Dec 13 05:56:40.373682 systemd-logind[1482]: Removed session 11. 
Dec 13 05:56:40.388211 kubelet[1906]: E1213 05:56:40.388149 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.488913 kubelet[1906]: E1213 05:56:40.488837 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.589876 kubelet[1906]: E1213 05:56:40.589715 1906 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.243.75.98\" not found" Dec 13 05:56:40.691272 kubelet[1906]: I1213 05:56:40.691212 1906 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 05:56:40.691930 containerd[1502]: time="2024-12-13T05:56:40.691728680Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 05:56:40.692476 kubelet[1906]: I1213 05:56:40.692095 1906 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 05:56:41.140822 kubelet[1906]: I1213 05:56:41.140728 1906 apiserver.go:52] "Watching apiserver" Dec 13 05:56:41.141445 kubelet[1906]: E1213 05:56:41.141068 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:41.148584 kubelet[1906]: E1213 05:56:41.147556 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:41.154975 systemd[1]: Created slice kubepods-besteffort-podd2cdf610_a20b_40cb_a46a_bafca935f1ef.slice - libcontainer container kubepods-besteffort-podd2cdf610_a20b_40cb_a46a_bafca935f1ef.slice. 
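The "Updating Pod CIDR" entry above switches the node's pod range from empty to `192.168.1.0/24`, after which containerd waits for a CNI config to appear. A quick sanity check of what that range admits, using the standard library (a sketch, not kubelet logic):

```python
import ipaddress

# Pod CIDR assigned to node 10.243.75.98 in the log above.
cidr = ipaddress.ip_network("192.168.1.0/24")

print(cidr.num_addresses)                             # 256
print(ipaddress.ip_address("192.168.1.42") in cidr)   # True
print(ipaddress.ip_address("10.243.75.98") in cidr)   # False
```

The node IP itself sits outside the pod range, as expected: pod addresses are drawn from the per-node CIDR, not the host network.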
Dec 13 05:56:41.167173 kubelet[1906]: I1213 05:56:41.166174 1906 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 05:56:41.173787 kubelet[1906]: I1213 05:56:41.173673 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4ltt\" (UniqueName: \"kubernetes.io/projected/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-kube-api-access-g4ltt\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.173787 kubelet[1906]: I1213 05:56:41.173721 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2cdf610-a20b-40cb-a46a-bafca935f1ef-kube-proxy\") pod \"kube-proxy-8ltml\" (UID: \"d2cdf610-a20b-40cb-a46a-bafca935f1ef\") " pod="kube-system/kube-proxy-8ltml" Dec 13 05:56:41.173787 kubelet[1906]: I1213 05:56:41.173749 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-lib-modules\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.173787 kubelet[1906]: I1213 05:56:41.173774 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-xtables-lock\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.174042 kubelet[1906]: I1213 05:56:41.173797 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-tigera-ca-bundle\") pod 
\"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.174042 kubelet[1906]: I1213 05:56:41.173829 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-node-certs\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.174042 kubelet[1906]: I1213 05:56:41.173852 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-var-lib-calico\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.174042 kubelet[1906]: I1213 05:56:41.173874 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-flexvol-driver-host\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.174042 kubelet[1906]: I1213 05:56:41.173898 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-cni-net-dir\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.175189 kubelet[1906]: I1213 05:56:41.173930 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/434925fb-a29e-456c-8c09-f83da1b81015-varrun\") pod \"csi-node-driver-cnc5j\" (UID: \"434925fb-a29e-456c-8c09-f83da1b81015\") " 
pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:41.175189 kubelet[1906]: I1213 05:56:41.173966 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/434925fb-a29e-456c-8c09-f83da1b81015-socket-dir\") pod \"csi-node-driver-cnc5j\" (UID: \"434925fb-a29e-456c-8c09-f83da1b81015\") " pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:41.175189 kubelet[1906]: I1213 05:56:41.173988 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2cdf610-a20b-40cb-a46a-bafca935f1ef-xtables-lock\") pod \"kube-proxy-8ltml\" (UID: \"d2cdf610-a20b-40cb-a46a-bafca935f1ef\") " pod="kube-system/kube-proxy-8ltml" Dec 13 05:56:41.175189 kubelet[1906]: I1213 05:56:41.174011 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58vv\" (UniqueName: \"kubernetes.io/projected/d2cdf610-a20b-40cb-a46a-bafca935f1ef-kube-api-access-t58vv\") pod \"kube-proxy-8ltml\" (UID: \"d2cdf610-a20b-40cb-a46a-bafca935f1ef\") " pod="kube-system/kube-proxy-8ltml" Dec 13 05:56:41.175189 kubelet[1906]: I1213 05:56:41.174060 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2cdf610-a20b-40cb-a46a-bafca935f1ef-lib-modules\") pod \"kube-proxy-8ltml\" (UID: \"d2cdf610-a20b-40cb-a46a-bafca935f1ef\") " pod="kube-system/kube-proxy-8ltml" Dec 13 05:56:41.175379 kubelet[1906]: I1213 05:56:41.174086 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-policysync\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.175379 kubelet[1906]: 
I1213 05:56:41.174127 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-var-run-calico\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.175379 kubelet[1906]: I1213 05:56:41.174153 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-cni-bin-dir\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.175379 kubelet[1906]: I1213 05:56:41.174180 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3378e186-b0b3-4c2d-9d6c-6d732bfcfe48-cni-log-dir\") pod \"calico-node-fnmxm\" (UID: \"3378e186-b0b3-4c2d-9d6c-6d732bfcfe48\") " pod="calico-system/calico-node-fnmxm" Dec 13 05:56:41.175379 kubelet[1906]: I1213 05:56:41.174202 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434925fb-a29e-456c-8c09-f83da1b81015-kubelet-dir\") pod \"csi-node-driver-cnc5j\" (UID: \"434925fb-a29e-456c-8c09-f83da1b81015\") " pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:41.175585 kubelet[1906]: I1213 05:56:41.174226 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/434925fb-a29e-456c-8c09-f83da1b81015-registration-dir\") pod \"csi-node-driver-cnc5j\" (UID: \"434925fb-a29e-456c-8c09-f83da1b81015\") " pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:41.175585 kubelet[1906]: I1213 05:56:41.174252 1906 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzk6\" (UniqueName: \"kubernetes.io/projected/434925fb-a29e-456c-8c09-f83da1b81015-kube-api-access-vnzk6\") pod \"csi-node-driver-cnc5j\" (UID: \"434925fb-a29e-456c-8c09-f83da1b81015\") " pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:41.175714 systemd[1]: Created slice kubepods-besteffort-pod3378e186_b0b3_4c2d_9d6c_6d732bfcfe48.slice - libcontainer container kubepods-besteffort-pod3378e186_b0b3_4c2d_9d6c_6d732bfcfe48.slice. Dec 13 05:56:41.279256 kubelet[1906]: E1213 05:56:41.279039 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.279256 kubelet[1906]: W1213 05:56:41.279077 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.279256 kubelet[1906]: E1213 05:56:41.279150 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:41.279960 kubelet[1906]: E1213 05:56:41.279374 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.279960 kubelet[1906]: W1213 05:56:41.279387 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.279960 kubelet[1906]: E1213 05:56:41.279400 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:41.279960 kubelet[1906]: E1213 05:56:41.279766 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.279960 kubelet[1906]: W1213 05:56:41.279779 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.279960 kubelet[1906]: E1213 05:56:41.279817 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:41.280592 kubelet[1906]: E1213 05:56:41.280160 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.280592 kubelet[1906]: W1213 05:56:41.280173 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.280592 kubelet[1906]: E1213 05:56:41.280205 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:41.280592 kubelet[1906]: E1213 05:56:41.280482 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.280592 kubelet[1906]: W1213 05:56:41.280497 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.280592 kubelet[1906]: E1213 05:56:41.280511 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:41.281020 kubelet[1906]: E1213 05:56:41.280754 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.281020 kubelet[1906]: W1213 05:56:41.280766 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.281020 kubelet[1906]: E1213 05:56:41.280778 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:41.281020 kubelet[1906]: E1213 05:56:41.281005 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.281020 kubelet[1906]: W1213 05:56:41.281017 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.281598 kubelet[1906]: E1213 05:56:41.281030 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:41.281598 kubelet[1906]: E1213 05:56:41.281383 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.281598 kubelet[1906]: W1213 05:56:41.281405 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.281598 kubelet[1906]: E1213 05:56:41.281418 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:41.291558 kubelet[1906]: E1213 05:56:41.290506 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.291558 kubelet[1906]: W1213 05:56:41.290530 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.291558 kubelet[1906]: E1213 05:56:41.290548 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:41.299383 kubelet[1906]: E1213 05:56:41.299358 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.299383 kubelet[1906]: W1213 05:56:41.299380 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.299522 kubelet[1906]: E1213 05:56:41.299398 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:41.305253 kubelet[1906]: E1213 05:56:41.304857 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.305253 kubelet[1906]: W1213 05:56:41.304877 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.305253 kubelet[1906]: E1213 05:56:41.304900 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:41.306265 kubelet[1906]: E1213 05:56:41.306225 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:41.306265 kubelet[1906]: W1213 05:56:41.306240 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:41.309236 kubelet[1906]: E1213 05:56:41.306254 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:41.472556 containerd[1502]: time="2024-12-13T05:56:41.472510970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8ltml,Uid:d2cdf610-a20b-40cb-a46a-bafca935f1ef,Namespace:kube-system,Attempt:0,}" Dec 13 05:56:41.479954 containerd[1502]: time="2024-12-13T05:56:41.479829453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnmxm,Uid:3378e186-b0b3-4c2d-9d6c-6d732bfcfe48,Namespace:calico-system,Attempt:0,}" Dec 13 05:56:42.141277 kubelet[1906]: E1213 05:56:42.141214 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:42.310680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329814455.mount: Deactivated successfully. Dec 13 05:56:42.319518 containerd[1502]: time="2024-12-13T05:56:42.318171045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:56:42.320044 containerd[1502]: time="2024-12-13T05:56:42.319552386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 05:56:42.320044 containerd[1502]: time="2024-12-13T05:56:42.319630826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:56:42.337276 containerd[1502]: time="2024-12-13T05:56:42.337233446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:56:42.338557 containerd[1502]: time="2024-12-13T05:56:42.338510363Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Dec 13 05:56:42.342259 containerd[1502]: time="2024-12-13T05:56:42.342185929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:56:42.344133 containerd[1502]: time="2024-12-13T05:56:42.343497074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 863.592064ms" Dec 13 05:56:42.347292 containerd[1502]: time="2024-12-13T05:56:42.347253643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 874.526909ms" Dec 13 05:56:42.497810 containerd[1502]: time="2024-12-13T05:56:42.496726182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:42.498041 containerd[1502]: time="2024-12-13T05:56:42.497874751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:42.498041 containerd[1502]: time="2024-12-13T05:56:42.497895120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:42.498218 containerd[1502]: time="2024-12-13T05:56:42.498193553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:42.517793 containerd[1502]: time="2024-12-13T05:56:42.517674162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:56:42.519163 containerd[1502]: time="2024-12-13T05:56:42.519052264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:56:42.519163 containerd[1502]: time="2024-12-13T05:56:42.519091948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:42.519721 containerd[1502]: time="2024-12-13T05:56:42.519653618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:56:42.607303 systemd[1]: Started cri-containerd-2ee81ec1825f3ecbd906f18b858d380283c7bf3f80e1dd33a3de12f9457643cf.scope - libcontainer container 2ee81ec1825f3ecbd906f18b858d380283c7bf3f80e1dd33a3de12f9457643cf. Dec 13 05:56:42.613880 systemd[1]: Started cri-containerd-070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129.scope - libcontainer container 070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129. 
Dec 13 05:56:42.659219 containerd[1502]: time="2024-12-13T05:56:42.659026847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8ltml,Uid:d2cdf610-a20b-40cb-a46a-bafca935f1ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ee81ec1825f3ecbd906f18b858d380283c7bf3f80e1dd33a3de12f9457643cf\"" Dec 13 05:56:42.664612 containerd[1502]: time="2024-12-13T05:56:42.664427935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 05:56:42.665758 containerd[1502]: time="2024-12-13T05:56:42.665727096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnmxm,Uid:3378e186-b0b3-4c2d-9d6c-6d732bfcfe48,Namespace:calico-system,Attempt:0,} returns sandbox id \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\"" Dec 13 05:56:42.753020 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 05:56:43.142374 kubelet[1906]: E1213 05:56:43.141736 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:43.270436 kubelet[1906]: E1213 05:56:43.269936 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:44.048447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3973487602.mount: Deactivated successfully. 
Dec 13 05:56:44.142776 kubelet[1906]: E1213 05:56:44.142669 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:44.743807 containerd[1502]: time="2024-12-13T05:56:44.743710039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:44.744960 containerd[1502]: time="2024-12-13T05:56:44.744912202Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Dec 13 05:56:44.745750 containerd[1502]: time="2024-12-13T05:56:44.745713849Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:44.750132 containerd[1502]: time="2024-12-13T05:56:44.749209409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:44.751607 containerd[1502]: time="2024-12-13T05:56:44.751573790Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.086970022s" Dec 13 05:56:44.751784 containerd[1502]: time="2024-12-13T05:56:44.751742131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 05:56:44.754451 containerd[1502]: time="2024-12-13T05:56:44.754420942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 05:56:44.755729 
containerd[1502]: time="2024-12-13T05:56:44.755696669Z" level=info msg="CreateContainer within sandbox \"2ee81ec1825f3ecbd906f18b858d380283c7bf3f80e1dd33a3de12f9457643cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 05:56:44.774726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892191262.mount: Deactivated successfully. Dec 13 05:56:44.777140 containerd[1502]: time="2024-12-13T05:56:44.776951907Z" level=info msg="CreateContainer within sandbox \"2ee81ec1825f3ecbd906f18b858d380283c7bf3f80e1dd33a3de12f9457643cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"222485691fd4850d73c326c328369d65092e6c578ecff7db4d353c1283edc6b4\"" Dec 13 05:56:44.778343 containerd[1502]: time="2024-12-13T05:56:44.778315948Z" level=info msg="StartContainer for \"222485691fd4850d73c326c328369d65092e6c578ecff7db4d353c1283edc6b4\"" Dec 13 05:56:44.834661 systemd[1]: Started cri-containerd-222485691fd4850d73c326c328369d65092e6c578ecff7db4d353c1283edc6b4.scope - libcontainer container 222485691fd4850d73c326c328369d65092e6c578ecff7db4d353c1283edc6b4. Dec 13 05:56:44.883293 containerd[1502]: time="2024-12-13T05:56:44.883007921Z" level=info msg="StartContainer for \"222485691fd4850d73c326c328369d65092e6c578ecff7db4d353c1283edc6b4\" returns successfully" Dec 13 05:56:45.603150 systemd-resolved[1383]: Clock change detected. Flushing caches. Dec 13 05:56:45.603558 systemd-timesyncd[1399]: Contacted time server [2a01:7e00::f03c:94ff:fe24:f68b]:123 (2.flatcar.pool.ntp.org). Dec 13 05:56:45.603641 systemd-timesyncd[1399]: Initial clock synchronization to Fri 2024-12-13 05:56:45.602892 UTC. 
Dec 13 05:56:45.756514 kubelet[1906]: E1213 05:56:45.756355 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:45.881131 kubelet[1906]: E1213 05:56:45.879580 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:46.005626 kubelet[1906]: E1213 05:56:46.005580 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.005855 kubelet[1906]: W1213 05:56:46.005826 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.006038 kubelet[1906]: E1213 05:56:46.005991 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.006703 kubelet[1906]: E1213 05:56:46.006476 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.006703 kubelet[1906]: W1213 05:56:46.006494 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.006703 kubelet[1906]: E1213 05:56:46.006509 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.007065 kubelet[1906]: E1213 05:56:46.006947 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.007065 kubelet[1906]: W1213 05:56:46.006978 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.007065 kubelet[1906]: E1213 05:56:46.006994 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.007836 kubelet[1906]: E1213 05:56:46.007631 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.007836 kubelet[1906]: W1213 05:56:46.007648 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.007836 kubelet[1906]: E1213 05:56:46.007663 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.008187 kubelet[1906]: E1213 05:56:46.008061 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.008187 kubelet[1906]: W1213 05:56:46.008113 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.008187 kubelet[1906]: E1213 05:56:46.008131 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.008795 kubelet[1906]: E1213 05:56:46.008650 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.008795 kubelet[1906]: W1213 05:56:46.008667 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.008795 kubelet[1906]: E1213 05:56:46.008682 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.009401 kubelet[1906]: E1213 05:56:46.009225 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.009401 kubelet[1906]: W1213 05:56:46.009242 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.009401 kubelet[1906]: E1213 05:56:46.009256 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.009808 kubelet[1906]: E1213 05:56:46.009654 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.009808 kubelet[1906]: W1213 05:56:46.009681 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.009808 kubelet[1906]: E1213 05:56:46.009697 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.010124 kubelet[1906]: E1213 05:56:46.010060 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.010277 kubelet[1906]: W1213 05:56:46.010205 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.010277 kubelet[1906]: E1213 05:56:46.010227 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.010845 kubelet[1906]: E1213 05:56:46.010685 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.010845 kubelet[1906]: W1213 05:56:46.010701 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.010845 kubelet[1906]: E1213 05:56:46.010715 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.011193 kubelet[1906]: E1213 05:56:46.011106 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.011193 kubelet[1906]: W1213 05:56:46.011124 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.011193 kubelet[1906]: E1213 05:56:46.011138 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.011578 kubelet[1906]: E1213 05:56:46.011561 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.011770 kubelet[1906]: W1213 05:56:46.011660 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.011770 kubelet[1906]: E1213 05:56:46.011679 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.012373 kubelet[1906]: E1213 05:56:46.012229 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.012373 kubelet[1906]: W1213 05:56:46.012246 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.012373 kubelet[1906]: E1213 05:56:46.012259 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.012843 kubelet[1906]: E1213 05:56:46.012665 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.012843 kubelet[1906]: W1213 05:56:46.012682 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.012843 kubelet[1906]: E1213 05:56:46.012696 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.013075 kubelet[1906]: E1213 05:56:46.013058 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.013282 kubelet[1906]: W1213 05:56:46.013193 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.013282 kubelet[1906]: E1213 05:56:46.013216 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.013812 kubelet[1906]: E1213 05:56:46.013673 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.013812 kubelet[1906]: W1213 05:56:46.013692 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.013812 kubelet[1906]: E1213 05:56:46.013705 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.014375 kubelet[1906]: E1213 05:56:46.014219 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.014375 kubelet[1906]: W1213 05:56:46.014235 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.014375 kubelet[1906]: E1213 05:56:46.014248 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.014623 kubelet[1906]: E1213 05:56:46.014606 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.014769 kubelet[1906]: W1213 05:56:46.014693 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.014769 kubelet[1906]: E1213 05:56:46.014715 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.015279 kubelet[1906]: E1213 05:56:46.015135 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.015279 kubelet[1906]: W1213 05:56:46.015151 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.015279 kubelet[1906]: E1213 05:56:46.015165 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.015593 kubelet[1906]: E1213 05:56:46.015514 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.015593 kubelet[1906]: W1213 05:56:46.015529 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.015593 kubelet[1906]: E1213 05:56:46.015542 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.029046 kubelet[1906]: E1213 05:56:46.028905 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.029046 kubelet[1906]: W1213 05:56:46.028932 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.029046 kubelet[1906]: E1213 05:56:46.028951 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.029400 kubelet[1906]: E1213 05:56:46.029270 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.029400 kubelet[1906]: W1213 05:56:46.029284 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.029400 kubelet[1906]: E1213 05:56:46.029316 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.029639 kubelet[1906]: E1213 05:56:46.029610 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.029639 kubelet[1906]: W1213 05:56:46.029633 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.029796 kubelet[1906]: E1213 05:56:46.029655 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.029944 kubelet[1906]: E1213 05:56:46.029915 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.029944 kubelet[1906]: W1213 05:56:46.029937 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.030049 kubelet[1906]: E1213 05:56:46.029959 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.030217 kubelet[1906]: E1213 05:56:46.030198 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.030217 kubelet[1906]: W1213 05:56:46.030217 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.030373 kubelet[1906]: E1213 05:56:46.030260 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.030635 kubelet[1906]: E1213 05:56:46.030603 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.030635 kubelet[1906]: W1213 05:56:46.030623 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.030750 kubelet[1906]: E1213 05:56:46.030645 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.031237 kubelet[1906]: E1213 05:56:46.031062 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.031237 kubelet[1906]: W1213 05:56:46.031098 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.031237 kubelet[1906]: E1213 05:56:46.031130 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.031624 kubelet[1906]: E1213 05:56:46.031483 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.031624 kubelet[1906]: W1213 05:56:46.031500 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.031624 kubelet[1906]: E1213 05:56:46.031522 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.031836 kubelet[1906]: E1213 05:56:46.031819 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.031944 kubelet[1906]: W1213 05:56:46.031924 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.032213 kubelet[1906]: E1213 05:56:46.032037 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.032499 kubelet[1906]: E1213 05:56:46.032370 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.032499 kubelet[1906]: W1213 05:56:46.032387 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.032499 kubelet[1906]: E1213 05:56:46.032439 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:56:46.032807 kubelet[1906]: E1213 05:56:46.032789 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.032977 kubelet[1906]: W1213 05:56:46.032892 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.032977 kubelet[1906]: E1213 05:56:46.032927 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.033234 kubelet[1906]: E1213 05:56:46.033202 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:56:46.033234 kubelet[1906]: W1213 05:56:46.033227 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:56:46.033337 kubelet[1906]: E1213 05:56:46.033243 1906 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:56:46.694574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345357891.mount: Deactivated successfully. 
Dec 13 05:56:46.757003 kubelet[1906]: E1213 05:56:46.756582 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:46.832983 containerd[1502]: time="2024-12-13T05:56:46.831933360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:46.832983 containerd[1502]: time="2024-12-13T05:56:46.832933958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 05:56:46.833736 containerd[1502]: time="2024-12-13T05:56:46.833704364Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:46.836216 containerd[1502]: time="2024-12-13T05:56:46.836172924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:46.837393 containerd[1502]: time="2024-12-13T05:56:46.837356576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.469207406s" Dec 13 05:56:46.837515 containerd[1502]: time="2024-12-13T05:56:46.837475921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 05:56:46.840667 containerd[1502]: time="2024-12-13T05:56:46.840633332Z" level=info 
msg="CreateContainer within sandbox \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 05:56:46.865963 containerd[1502]: time="2024-12-13T05:56:46.865043610Z" level=info msg="CreateContainer within sandbox \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f\"" Dec 13 05:56:46.865764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436281371.mount: Deactivated successfully. Dec 13 05:56:46.867133 containerd[1502]: time="2024-12-13T05:56:46.867075243Z" level=info msg="StartContainer for \"bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f\"" Dec 13 05:56:46.907368 systemd[1]: Started cri-containerd-bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f.scope - libcontainer container bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f. Dec 13 05:56:46.950376 containerd[1502]: time="2024-12-13T05:56:46.949284013Z" level=info msg="StartContainer for \"bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f\" returns successfully" Dec 13 05:56:46.966283 systemd[1]: cri-containerd-bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f.scope: Deactivated successfully. 
Dec 13 05:56:47.291811 containerd[1502]: time="2024-12-13T05:56:47.291570791Z" level=info msg="shim disconnected" id=bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f namespace=k8s.io Dec 13 05:56:47.291811 containerd[1502]: time="2024-12-13T05:56:47.291700744Z" level=warning msg="cleaning up after shim disconnected" id=bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f namespace=k8s.io Dec 13 05:56:47.291811 containerd[1502]: time="2024-12-13T05:56:47.291752113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:56:47.630016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcd8a3e3819af304b8d0171c17d8b18a2323de5aa7b4c3c050cf005e0aabe86f-rootfs.mount: Deactivated successfully. Dec 13 05:56:47.757756 kubelet[1906]: E1213 05:56:47.757682 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:47.878930 kubelet[1906]: E1213 05:56:47.878515 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:47.930048 containerd[1502]: time="2024-12-13T05:56:47.929632383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 05:56:47.947410 kubelet[1906]: I1213 05:56:47.947336 1906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8ltml" podStartSLOduration=5.856651576 podStartE2EDuration="7.947299029s" podCreationTimestamp="2024-12-13 05:56:40 +0000 UTC" firstStartedPulling="2024-12-13 05:56:42.662658783 +0000 UTC m=+3.987521581" lastFinishedPulling="2024-12-13 05:56:44.753306227 +0000 UTC m=+6.078169034" observedRunningTime="2024-12-13 05:56:45.933130627 +0000 UTC m=+6.645016171" 
watchObservedRunningTime="2024-12-13 05:56:47.947299029 +0000 UTC m=+8.659184561" Dec 13 05:56:48.758398 kubelet[1906]: E1213 05:56:48.758342 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:49.759256 kubelet[1906]: E1213 05:56:49.759075 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:49.883187 kubelet[1906]: E1213 05:56:49.881038 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:50.761052 kubelet[1906]: E1213 05:56:50.760920 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:51.762011 kubelet[1906]: E1213 05:56:51.761951 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:51.880255 kubelet[1906]: E1213 05:56:51.879610 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:52.425756 containerd[1502]: time="2024-12-13T05:56:52.425663766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:52.426924 containerd[1502]: time="2024-12-13T05:56:52.426583220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 05:56:52.428400 
containerd[1502]: time="2024-12-13T05:56:52.428323203Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:52.431497 containerd[1502]: time="2024-12-13T05:56:52.431465560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:56:52.433109 containerd[1502]: time="2024-12-13T05:56:52.432802040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.503101476s" Dec 13 05:56:52.433109 containerd[1502]: time="2024-12-13T05:56:52.432857418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 05:56:52.436691 containerd[1502]: time="2024-12-13T05:56:52.436643152Z" level=info msg="CreateContainer within sandbox \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 05:56:52.454987 containerd[1502]: time="2024-12-13T05:56:52.454914083Z" level=info msg="CreateContainer within sandbox \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b\"" Dec 13 05:56:52.455993 containerd[1502]: time="2024-12-13T05:56:52.455928787Z" level=info msg="StartContainer for \"59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b\"" Dec 13 05:56:52.503616 
systemd[1]: run-containerd-runc-k8s.io-59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b-runc.gj9wGl.mount: Deactivated successfully. Dec 13 05:56:52.517378 systemd[1]: Started cri-containerd-59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b.scope - libcontainer container 59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b. Dec 13 05:56:52.558470 containerd[1502]: time="2024-12-13T05:56:52.558418225Z" level=info msg="StartContainer for \"59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b\" returns successfully" Dec 13 05:56:52.763850 kubelet[1906]: E1213 05:56:52.763687 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:53.290450 containerd[1502]: time="2024-12-13T05:56:53.290319083Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 05:56:53.293948 systemd[1]: cri-containerd-59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b.scope: Deactivated successfully. Dec 13 05:56:53.323664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b-rootfs.mount: Deactivated successfully. 
Dec 13 05:56:53.370916 kubelet[1906]: I1213 05:56:53.370845 1906 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 05:56:53.617013 containerd[1502]: time="2024-12-13T05:56:53.616862738Z" level=info msg="shim disconnected" id=59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b namespace=k8s.io Dec 13 05:56:53.617913 containerd[1502]: time="2024-12-13T05:56:53.617685441Z" level=warning msg="cleaning up after shim disconnected" id=59a49ae6df7827f814fc14d6cc42fe4486ef7691db38e390bf05c8d6b9aaa70b namespace=k8s.io Dec 13 05:56:53.617913 containerd[1502]: time="2024-12-13T05:56:53.617727985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:56:53.764425 kubelet[1906]: E1213 05:56:53.764372 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:53.887135 systemd[1]: Created slice kubepods-besteffort-pod434925fb_a29e_456c_8c09_f83da1b81015.slice - libcontainer container kubepods-besteffort-pod434925fb_a29e_456c_8c09_f83da1b81015.slice. 
Dec 13 05:56:53.891838 containerd[1502]: time="2024-12-13T05:56:53.891212467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnc5j,Uid:434925fb-a29e-456c-8c09-f83da1b81015,Namespace:calico-system,Attempt:0,}" Dec 13 05:56:53.954537 containerd[1502]: time="2024-12-13T05:56:53.954061487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 05:56:53.983930 containerd[1502]: time="2024-12-13T05:56:53.983864141Z" level=error msg="Failed to destroy network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:53.986386 containerd[1502]: time="2024-12-13T05:56:53.984493632Z" level=error msg="encountered an error cleaning up failed sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:53.986386 containerd[1502]: time="2024-12-13T05:56:53.984597768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnc5j,Uid:434925fb-a29e-456c-8c09-f83da1b81015,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:53.985877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e-shm.mount: Deactivated successfully. 
Dec 13 05:56:53.986624 kubelet[1906]: E1213 05:56:53.984903 1906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:53.986624 kubelet[1906]: E1213 05:56:53.984996 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:53.986624 kubelet[1906]: E1213 05:56:53.985036 1906 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cnc5j" Dec 13 05:56:53.986791 kubelet[1906]: E1213 05:56:53.985135 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cnc5j_calico-system(434925fb-a29e-456c-8c09-f83da1b81015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cnc5j_calico-system(434925fb-a29e-456c-8c09-f83da1b81015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:54.765668 kubelet[1906]: E1213 05:56:54.765553 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:54.952549 kubelet[1906]: I1213 05:56:54.952486 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:56:54.953498 containerd[1502]: time="2024-12-13T05:56:54.953453640Z" level=info msg="StopPodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\"" Dec 13 05:56:54.954037 containerd[1502]: time="2024-12-13T05:56:54.953686809Z" level=info msg="Ensure that sandbox 5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e in task-service has been cleanup successfully" Dec 13 05:56:54.985435 containerd[1502]: time="2024-12-13T05:56:54.985338549Z" level=error msg="StopPodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" failed" error="failed to destroy network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:54.985863 kubelet[1906]: E1213 05:56:54.985778 1906 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:56:54.985977 kubelet[1906]: E1213 05:56:54.985900 1906 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e"} Dec 13 05:56:54.986038 kubelet[1906]: E1213 05:56:54.986010 1906 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"434925fb-a29e-456c-8c09-f83da1b81015\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:56:54.986277 kubelet[1906]: E1213 05:56:54.986075 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"434925fb-a29e-456c-8c09-f83da1b81015\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cnc5j" podUID="434925fb-a29e-456c-8c09-f83da1b81015" Dec 13 05:56:55.766718 kubelet[1906]: E1213 05:56:55.766591 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:56.767588 kubelet[1906]: E1213 05:56:56.767529 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:57.000572 update_engine[1483]: I20241213 05:56:57.000238 1483 update_attempter.cc:509] Updating boot flags... 
Dec 13 05:56:57.066676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2458) Dec 13 05:56:57.174134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2457) Dec 13 05:56:57.769293 kubelet[1906]: E1213 05:56:57.769217 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:58.741142 systemd[1]: Created slice kubepods-besteffort-pod4a7f0a20_2756_46a0_9f4e_263ff75e5d10.slice - libcontainer container kubepods-besteffort-pod4a7f0a20_2756_46a0_9f4e_263ff75e5d10.slice. Dec 13 05:56:58.769935 kubelet[1906]: E1213 05:56:58.769884 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:56:58.831167 kubelet[1906]: I1213 05:56:58.831112 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxh5p\" (UniqueName: \"kubernetes.io/projected/4a7f0a20-2756-46a0-9f4e-263ff75e5d10-kube-api-access-bxh5p\") pod \"nginx-deployment-8587fbcb89-vcpvj\" (UID: \"4a7f0a20-2756-46a0-9f4e-263ff75e5d10\") " pod="default/nginx-deployment-8587fbcb89-vcpvj" Dec 13 05:56:59.047665 containerd[1502]: time="2024-12-13T05:56:59.047266065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vcpvj,Uid:4a7f0a20-2756-46a0-9f4e-263ff75e5d10,Namespace:default,Attempt:0,}" Dec 13 05:56:59.186542 containerd[1502]: time="2024-12-13T05:56:59.184421742Z" level=error msg="Failed to destroy network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:59.188618 containerd[1502]: time="2024-12-13T05:56:59.186986013Z" level=error msg="encountered an 
error cleaning up failed sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:59.188618 containerd[1502]: time="2024-12-13T05:56:59.187104846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vcpvj,Uid:4a7f0a20-2756-46a0-9f4e-263ff75e5d10,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:56:59.187776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd-shm.mount: Deactivated successfully. 
Dec 13 05:56:59.188974 kubelet[1906]: E1213 05:56:59.187464 1906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 05:56:59.188974 kubelet[1906]: E1213 05:56:59.187582 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vcpvj"
Dec 13 05:56:59.188974 kubelet[1906]: E1213 05:56:59.187636 1906 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vcpvj"
Dec 13 05:56:59.190218 kubelet[1906]: E1213 05:56:59.187716 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vcpvj_default(4a7f0a20-2756-46a0-9f4e-263ff75e5d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-vcpvj_default(4a7f0a20-2756-46a0-9f4e-263ff75e5d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vcpvj" podUID="4a7f0a20-2756-46a0-9f4e-263ff75e5d10"
Dec 13 05:56:59.752596 kubelet[1906]: E1213 05:56:59.752499 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:56:59.771127 kubelet[1906]: E1213 05:56:59.770688 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:56:59.973637 kubelet[1906]: I1213 05:56:59.972546 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:56:59.974230 containerd[1502]: time="2024-12-13T05:56:59.973558860Z" level=info msg="StopPodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\""
Dec 13 05:56:59.974230 containerd[1502]: time="2024-12-13T05:56:59.973954995Z" level=info msg="Ensure that sandbox d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd in task-service has been cleanup successfully"
Dec 13 05:57:00.026275 containerd[1502]: time="2024-12-13T05:57:00.024488244Z" level=error msg="StopPodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" failed" error="failed to destroy network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 05:57:00.026463 kubelet[1906]: E1213 05:57:00.025080 1906 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:57:00.026463 kubelet[1906]: E1213 05:57:00.025177 1906 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"}
Dec 13 05:57:00.026463 kubelet[1906]: E1213 05:57:00.025237 1906 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a7f0a20-2756-46a0-9f4e-263ff75e5d10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 05:57:00.026463 kubelet[1906]: E1213 05:57:00.025279 1906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a7f0a20-2756-46a0-9f4e-263ff75e5d10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vcpvj" podUID="4a7f0a20-2756-46a0-9f4e-263ff75e5d10"
Dec 13 05:57:00.771657 kubelet[1906]: E1213 05:57:00.771525 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:01.774129 kubelet[1906]: E1213 05:57:01.773836 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:01.875560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018130762.mount: Deactivated successfully.
Dec 13 05:57:01.936975 containerd[1502]: time="2024-12-13T05:57:01.936867262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:01.939771 containerd[1502]: time="2024-12-13T05:57:01.939701802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 05:57:01.940823 containerd[1502]: time="2024-12-13T05:57:01.940766714Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:01.944392 containerd[1502]: time="2024-12-13T05:57:01.944080365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:01.945007 containerd[1502]: time="2024-12-13T05:57:01.944965254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.990806407s"
Dec 13 05:57:01.945103 containerd[1502]: time="2024-12-13T05:57:01.945020290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 05:57:01.965341 containerd[1502]: time="2024-12-13T05:57:01.965288275Z" level=info msg="CreateContainer within sandbox \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 05:57:01.985002 containerd[1502]: time="2024-12-13T05:57:01.984883820Z" level=info msg="CreateContainer within sandbox \"070c02faa536fa7c73ab708cdb69e417265913b4109f2bd4749390bcdbe7d129\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3e7481748530edb7494d938851eb83d517ad7b94a949e06f7c2ed22208ab0f99\""
Dec 13 05:57:01.985746 containerd[1502]: time="2024-12-13T05:57:01.985582567Z" level=info msg="StartContainer for \"3e7481748530edb7494d938851eb83d517ad7b94a949e06f7c2ed22208ab0f99\""
Dec 13 05:57:02.076318 systemd[1]: Started cri-containerd-3e7481748530edb7494d938851eb83d517ad7b94a949e06f7c2ed22208ab0f99.scope - libcontainer container 3e7481748530edb7494d938851eb83d517ad7b94a949e06f7c2ed22208ab0f99.
Dec 13 05:57:02.116692 containerd[1502]: time="2024-12-13T05:57:02.116638226Z" level=info msg="StartContainer for \"3e7481748530edb7494d938851eb83d517ad7b94a949e06f7c2ed22208ab0f99\" returns successfully"
Dec 13 05:57:02.213147 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 05:57:02.213338 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Dec 13 05:57:02.774852 kubelet[1906]: E1213 05:57:02.774755 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:03.011357 kubelet[1906]: I1213 05:57:03.011243 1906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fnmxm" podStartSLOduration=4.344561485 podStartE2EDuration="23.011211995s" podCreationTimestamp="2024-12-13 05:56:40 +0000 UTC" firstStartedPulling="2024-12-13 05:56:42.667294421 +0000 UTC m=+3.992157220" lastFinishedPulling="2024-12-13 05:57:01.946922205 +0000 UTC m=+22.658807730" observedRunningTime="2024-12-13 05:57:03.010731388 +0000 UTC m=+23.722616929" watchObservedRunningTime="2024-12-13 05:57:03.011211995 +0000 UTC m=+23.723097534"
Dec 13 05:57:03.775955 kubelet[1906]: E1213 05:57:03.775803 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:03.857124 kernel: bpftool[2702]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 05:57:03.989806 kubelet[1906]: I1213 05:57:03.989172 1906 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 05:57:04.159243 systemd-networkd[1430]: vxlan.calico: Link UP
Dec 13 05:57:04.160295 systemd-networkd[1430]: vxlan.calico: Gained carrier
Dec 13 05:57:04.776817 kubelet[1906]: E1213 05:57:04.776702 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:05.778004 kubelet[1906]: E1213 05:57:05.777925 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:05.837389 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL
Dec 13 05:57:05.881538 containerd[1502]: time="2024-12-13T05:57:05.880518361Z" level=info msg="StopPodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\""
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:05.963 [INFO][2799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:05.963 [INFO][2799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" iface="eth0" netns="/var/run/netns/cni-9b6874a8-efce-f5cb-1cd5-7dd6a838d0d9"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:05.963 [INFO][2799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" iface="eth0" netns="/var/run/netns/cni-9b6874a8-efce-f5cb-1cd5-7dd6a838d0d9"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:05.964 [INFO][2799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" iface="eth0" netns="/var/run/netns/cni-9b6874a8-efce-f5cb-1cd5-7dd6a838d0d9"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:05.964 [INFO][2799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:05.964 [INFO][2799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.034 [INFO][2805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.035 [INFO][2805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.035 [INFO][2805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.062 [WARNING][2805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.062 [INFO][2805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.065 [INFO][2805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 05:57:06.068784 containerd[1502]: 2024-12-13 05:57:06.067 [INFO][2799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e"
Dec 13 05:57:06.071576 containerd[1502]: time="2024-12-13T05:57:06.071229897Z" level=info msg="TearDown network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" successfully"
Dec 13 05:57:06.071576 containerd[1502]: time="2024-12-13T05:57:06.071288479Z" level=info msg="StopPodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" returns successfully"
Dec 13 05:57:06.071991 systemd[1]: run-netns-cni\x2d9b6874a8\x2defce\x2df5cb\x2d1cd5\x2d7dd6a838d0d9.mount: Deactivated successfully.
Dec 13 05:57:06.073473 containerd[1502]: time="2024-12-13T05:57:06.072515049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnc5j,Uid:434925fb-a29e-456c-8c09-f83da1b81015,Namespace:calico-system,Attempt:1,}"
Dec 13 05:57:06.241059 systemd-networkd[1430]: caliee7ce394052: Link UP
Dec 13 05:57:06.241543 systemd-networkd[1430]: caliee7ce394052: Gained carrier
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.132 [INFO][2813] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.243.75.98-k8s-csi--node--driver--cnc5j-eth0 csi-node-driver- calico-system 434925fb-a29e-456c-8c09-f83da1b81015 1075 0 2024-12-13 05:56:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.243.75.98 csi-node-driver-cnc5j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliee7ce394052 [] []}} ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.132 [INFO][2813] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.168 [INFO][2824] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" HandleID="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.183 [INFO][2824] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" HandleID="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003194e0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.243.75.98", "pod":"csi-node-driver-cnc5j", "timestamp":"2024-12-13 05:57:06.168297259 +0000 UTC"}, Hostname:"10.243.75.98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.183 [INFO][2824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.183 [INFO][2824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.183 [INFO][2824] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.243.75.98'
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.186 [INFO][2824] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.193 [INFO][2824] ipam/ipam.go 372: Looking up existing affinities for host host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.200 [INFO][2824] ipam/ipam.go 489: Trying affinity for 192.168.111.128/26 host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.202 [INFO][2824] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.128/26 host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.208 [INFO][2824] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.128/26 host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.208 [INFO][2824] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.128/26 handle="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.211 [INFO][2824] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.217 [INFO][2824] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.128/26 handle="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.226 [INFO][2824] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.129/26] block=192.168.111.128/26 handle="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.226 [INFO][2824] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.129/26] handle="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" host="10.243.75.98"
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.226 [INFO][2824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 05:57:06.263684 containerd[1502]: 2024-12-13 05:57:06.226 [INFO][2824] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.129/26] IPv6=[] ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" HandleID="k8s-pod-network.a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.264971 containerd[1502]: 2024-12-13 05:57:06.228 [INFO][2813] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-csi--node--driver--cnc5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"434925fb-a29e-456c-8c09-f83da1b81015", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"", Pod:"csi-node-driver-cnc5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee7ce394052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 05:57:06.264971 containerd[1502]: 2024-12-13 05:57:06.229 [INFO][2813] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.129/32] ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.264971 containerd[1502]: 2024-12-13 05:57:06.229 [INFO][2813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee7ce394052 ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.264971 containerd[1502]: 2024-12-13 05:57:06.243 [INFO][2813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.264971 containerd[1502]: 2024-12-13 05:57:06.243 [INFO][2813] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-csi--node--driver--cnc5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"434925fb-a29e-456c-8c09-f83da1b81015", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665", Pod:"csi-node-driver-cnc5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee7ce394052", MAC:"4e:6b:1e:44:87:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 05:57:06.264971 containerd[1502]: 2024-12-13 05:57:06.261 [INFO][2813] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665" Namespace="calico-system" Pod="csi-node-driver-cnc5j" WorkloadEndpoint="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0"
Dec 13 05:57:06.312975 containerd[1502]: time="2024-12-13T05:57:06.312726891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:57:06.312975 containerd[1502]: time="2024-12-13T05:57:06.312855303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:57:06.312975 containerd[1502]: time="2024-12-13T05:57:06.312898584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:57:06.313456 containerd[1502]: time="2024-12-13T05:57:06.313038488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:57:06.352370 systemd[1]: Started cri-containerd-a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665.scope - libcontainer container a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665.
Dec 13 05:57:06.386356 containerd[1502]: time="2024-12-13T05:57:06.386189225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnc5j,Uid:434925fb-a29e-456c-8c09-f83da1b81015,Namespace:calico-system,Attempt:1,} returns sandbox id \"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665\""
Dec 13 05:57:06.388673 containerd[1502]: time="2024-12-13T05:57:06.388643579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 05:57:06.779173 kubelet[1906]: E1213 05:57:06.778908 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:06.817215 kubelet[1906]: I1213 05:57:06.816896 1906 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 05:57:07.074631 systemd[1]: run-containerd-runc-k8s.io-a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665-runc.q21rBm.mount: Deactivated successfully.
Dec 13 05:57:07.565269 systemd-networkd[1430]: caliee7ce394052: Gained IPv6LL
Dec 13 05:57:07.741307 containerd[1502]: time="2024-12-13T05:57:07.740422544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:07.742969 containerd[1502]: time="2024-12-13T05:57:07.742906671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 05:57:07.743893 containerd[1502]: time="2024-12-13T05:57:07.743817922Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:07.746553 containerd[1502]: time="2024-12-13T05:57:07.746497145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:07.748013 containerd[1502]: time="2024-12-13T05:57:07.747373313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.358690065s"
Dec 13 05:57:07.748013 containerd[1502]: time="2024-12-13T05:57:07.747433461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 05:57:07.750533 containerd[1502]: time="2024-12-13T05:57:07.750499923Z" level=info msg="CreateContainer within sandbox \"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 05:57:07.775107 containerd[1502]: time="2024-12-13T05:57:07.775031668Z" level=info msg="CreateContainer within sandbox \"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0b5b47f6a8aefcf3f3c53822faba5a0d31b9f9f9bc7fe1bbde744f7b401fa9dd\""
Dec 13 05:57:07.776052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128562497.mount: Deactivated successfully.
Dec 13 05:57:07.777700 containerd[1502]: time="2024-12-13T05:57:07.777646818Z" level=info msg="StartContainer for \"0b5b47f6a8aefcf3f3c53822faba5a0d31b9f9f9bc7fe1bbde744f7b401fa9dd\""
Dec 13 05:57:07.779179 kubelet[1906]: E1213 05:57:07.779129 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:07.822320 systemd[1]: Started cri-containerd-0b5b47f6a8aefcf3f3c53822faba5a0d31b9f9f9bc7fe1bbde744f7b401fa9dd.scope - libcontainer container 0b5b47f6a8aefcf3f3c53822faba5a0d31b9f9f9bc7fe1bbde744f7b401fa9dd.
Dec 13 05:57:07.862652 containerd[1502]: time="2024-12-13T05:57:07.862582230Z" level=info msg="StartContainer for \"0b5b47f6a8aefcf3f3c53822faba5a0d31b9f9f9bc7fe1bbde744f7b401fa9dd\" returns successfully"
Dec 13 05:57:07.865377 containerd[1502]: time="2024-12-13T05:57:07.864383055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 05:57:08.780240 kubelet[1906]: E1213 05:57:08.780077 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:09.366308 containerd[1502]: time="2024-12-13T05:57:09.366228884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:09.368765 containerd[1502]: time="2024-12-13T05:57:09.368692776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 05:57:09.369757 containerd[1502]: time="2024-12-13T05:57:09.369687805Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:09.372547 containerd[1502]: time="2024-12-13T05:57:09.372485427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:09.373959 containerd[1502]: time="2024-12-13T05:57:09.373569290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.50913786s"
Dec 13 05:57:09.373959 containerd[1502]: time="2024-12-13T05:57:09.373637196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 05:57:09.377275 containerd[1502]: time="2024-12-13T05:57:09.377223483Z" level=info msg="CreateContainer within sandbox \"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 05:57:09.395174 containerd[1502]: time="2024-12-13T05:57:09.394927714Z" level=info msg="CreateContainer within sandbox \"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d87349568226a18eed222fc0b3b07d6d0048bb9d8096f5e03c0b1f16bb097f44\""
Dec 13 05:57:09.397752 containerd[1502]: time="2024-12-13T05:57:09.396025116Z" level=info msg="StartContainer for \"d87349568226a18eed222fc0b3b07d6d0048bb9d8096f5e03c0b1f16bb097f44\""
Dec 13 05:57:09.434989 systemd[1]: run-containerd-runc-k8s.io-d87349568226a18eed222fc0b3b07d6d0048bb9d8096f5e03c0b1f16bb097f44-runc.mCFPQh.mount: Deactivated successfully.
Dec 13 05:57:09.448315 systemd[1]: Started cri-containerd-d87349568226a18eed222fc0b3b07d6d0048bb9d8096f5e03c0b1f16bb097f44.scope - libcontainer container d87349568226a18eed222fc0b3b07d6d0048bb9d8096f5e03c0b1f16bb097f44.
Dec 13 05:57:09.483730 containerd[1502]: time="2024-12-13T05:57:09.483683037Z" level=info msg="StartContainer for \"d87349568226a18eed222fc0b3b07d6d0048bb9d8096f5e03c0b1f16bb097f44\" returns successfully" Dec 13 05:57:09.780797 kubelet[1906]: E1213 05:57:09.780630 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:09.899783 kubelet[1906]: I1213 05:57:09.899730 1906 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 05:57:09.899944 kubelet[1906]: I1213 05:57:09.899801 1906 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 05:57:10.781469 kubelet[1906]: E1213 05:57:10.781391 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:11.782342 kubelet[1906]: E1213 05:57:11.782260 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:11.880311 containerd[1502]: time="2024-12-13T05:57:11.879791033Z" level=info msg="StopPodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\"" Dec 13 05:57:11.933648 kubelet[1906]: I1213 05:57:11.933557 1906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cnc5j" podStartSLOduration=28.946384488 podStartE2EDuration="31.933525363s" podCreationTimestamp="2024-12-13 05:56:40 +0000 UTC" firstStartedPulling="2024-12-13 05:57:06.38818873 +0000 UTC m=+27.100074261" lastFinishedPulling="2024-12-13 05:57:09.375329593 +0000 UTC m=+30.087215136" observedRunningTime="2024-12-13 05:57:10.032129553 +0000 UTC m=+30.744015104" watchObservedRunningTime="2024-12-13 05:57:11.933525363 +0000 UTC m=+32.645410912" Dec 13 
05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.934 [INFO][3026] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.934 [INFO][3026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" iface="eth0" netns="/var/run/netns/cni-470926be-801a-5df5-72ed-05351079ff0d" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.934 [INFO][3026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" iface="eth0" netns="/var/run/netns/cni-470926be-801a-5df5-72ed-05351079ff0d" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.934 [INFO][3026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" iface="eth0" netns="/var/run/netns/cni-470926be-801a-5df5-72ed-05351079ff0d" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.934 [INFO][3026] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.934 [INFO][3026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.965 [INFO][3032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.965 [INFO][3032] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.965 [INFO][3032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.975 [WARNING][3032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.975 [INFO][3032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.977 [INFO][3032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:57:11.980227 containerd[1502]: 2024-12-13 05:57:11.979 [INFO][3026] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:11.982293 containerd[1502]: time="2024-12-13T05:57:11.982237754Z" level=info msg="TearDown network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" successfully" Dec 13 05:57:11.982293 containerd[1502]: time="2024-12-13T05:57:11.982277463Z" level=info msg="StopPodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" returns successfully" Dec 13 05:57:11.984146 containerd[1502]: time="2024-12-13T05:57:11.983250401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vcpvj,Uid:4a7f0a20-2756-46a0-9f4e-263ff75e5d10,Namespace:default,Attempt:1,}" Dec 13 05:57:11.983902 systemd[1]: run-netns-cni\x2d470926be\x2d801a\x2d5df5\x2d72ed\x2d05351079ff0d.mount: Deactivated successfully. Dec 13 05:57:12.127838 systemd-networkd[1430]: cali27eb4987a4c: Link UP Dec 13 05:57:12.128452 systemd-networkd[1430]: cali27eb4987a4c: Gained carrier Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.039 [INFO][3038] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0 nginx-deployment-8587fbcb89- default 4a7f0a20-2756-46a0-9f4e-263ff75e5d10 1108 0 2024-12-13 05:56:58 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.243.75.98 nginx-deployment-8587fbcb89-vcpvj eth0 default [] [] [kns.default ksa.default.default] cali27eb4987a4c [] []}} ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.039 [INFO][3038] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.073 [INFO][3050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" HandleID="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.088 [INFO][3050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" HandleID="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916d0), Attrs:map[string]string{"namespace":"default", "node":"10.243.75.98", "pod":"nginx-deployment-8587fbcb89-vcpvj", "timestamp":"2024-12-13 05:57:12.073617648 +0000 UTC"}, Hostname:"10.243.75.98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.088 [INFO][3050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.088 [INFO][3050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.088 [INFO][3050] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.243.75.98' Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.091 [INFO][3050] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.097 [INFO][3050] ipam/ipam.go 372: Looking up existing affinities for host host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.102 [INFO][3050] ipam/ipam.go 489: Trying affinity for 192.168.111.128/26 host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.105 [INFO][3050] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.128/26 host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.108 [INFO][3050] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.128/26 host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.108 [INFO][3050] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.128/26 handle="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.110 [INFO][3050] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589 Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.115 [INFO][3050] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.128/26 handle="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.122 [INFO][3050] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.130/26] block=192.168.111.128/26 
handle="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.122 [INFO][3050] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.130/26] handle="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" host="10.243.75.98" Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.122 [INFO][3050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:57:12.141076 containerd[1502]: 2024-12-13 05:57:12.122 [INFO][3050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.130/26] IPv6=[] ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" HandleID="k8s-pod-network.44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.142160 containerd[1502]: 2024-12-13 05:57:12.123 [INFO][3038] cni-plugin/k8s.go 386: Populated endpoint ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"4a7f0a20-2756-46a0-9f4e-263ff75e5d10", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-vcpvj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.111.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali27eb4987a4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:12.142160 containerd[1502]: 2024-12-13 05:57:12.124 [INFO][3038] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.130/32] ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.142160 containerd[1502]: 2024-12-13 05:57:12.124 [INFO][3038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27eb4987a4c ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.142160 containerd[1502]: 2024-12-13 05:57:12.129 [INFO][3038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.142160 containerd[1502]: 2024-12-13 05:57:12.130 [INFO][3038] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" 
WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"4a7f0a20-2756-46a0-9f4e-263ff75e5d10", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589", Pod:"nginx-deployment-8587fbcb89-vcpvj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.111.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali27eb4987a4c", MAC:"82:c4:33:ed:6e:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:12.142160 containerd[1502]: 2024-12-13 05:57:12.138 [INFO][3038] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589" Namespace="default" Pod="nginx-deployment-8587fbcb89-vcpvj" WorkloadEndpoint="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:12.183793 containerd[1502]: time="2024-12-13T05:57:12.183403071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:57:12.183793 containerd[1502]: time="2024-12-13T05:57:12.183485816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:57:12.183793 containerd[1502]: time="2024-12-13T05:57:12.183500554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:57:12.183793 containerd[1502]: time="2024-12-13T05:57:12.183618829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:57:12.219396 systemd[1]: Started cri-containerd-44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589.scope - libcontainer container 44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589. Dec 13 05:57:12.279028 containerd[1502]: time="2024-12-13T05:57:12.278967211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vcpvj,Uid:4a7f0a20-2756-46a0-9f4e-263ff75e5d10,Namespace:default,Attempt:1,} returns sandbox id \"44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589\"" Dec 13 05:57:12.282230 containerd[1502]: time="2024-12-13T05:57:12.282153084Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 05:57:12.782783 kubelet[1906]: E1213 05:57:12.782704 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:13.783450 kubelet[1906]: E1213 05:57:13.783300 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:13.966823 systemd-networkd[1430]: cali27eb4987a4c: Gained IPv6LL Dec 13 05:57:14.785304 kubelet[1906]: E1213 05:57:14.785244 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
05:57:15.787073 kubelet[1906]: E1213 05:57:15.786955 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:15.830284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539138059.mount: Deactivated successfully. Dec 13 05:57:16.787711 kubelet[1906]: E1213 05:57:16.787628 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:17.483153 containerd[1502]: time="2024-12-13T05:57:17.481646906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:57:17.485115 containerd[1502]: time="2024-12-13T05:57:17.485020188Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 05:57:17.486514 containerd[1502]: time="2024-12-13T05:57:17.486482069Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:57:17.498553 containerd[1502]: time="2024-12-13T05:57:17.498498602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:57:17.500545 containerd[1502]: time="2024-12-13T05:57:17.500505177Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 5.218231448s" Dec 13 05:57:17.500636 containerd[1502]: time="2024-12-13T05:57:17.500560425Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 05:57:17.515158 containerd[1502]: time="2024-12-13T05:57:17.515124603Z" level=info msg="CreateContainer within sandbox \"44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 05:57:17.533517 containerd[1502]: time="2024-12-13T05:57:17.533437943Z" level=info msg="CreateContainer within sandbox \"44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5bd1ef1168070b6c46c6e92dabff6448743d24a260a7048b5b90da7fd2e78dec\"" Dec 13 05:57:17.534079 containerd[1502]: time="2024-12-13T05:57:17.534042197Z" level=info msg="StartContainer for \"5bd1ef1168070b6c46c6e92dabff6448743d24a260a7048b5b90da7fd2e78dec\"" Dec 13 05:57:17.591402 systemd[1]: Started cri-containerd-5bd1ef1168070b6c46c6e92dabff6448743d24a260a7048b5b90da7fd2e78dec.scope - libcontainer container 5bd1ef1168070b6c46c6e92dabff6448743d24a260a7048b5b90da7fd2e78dec. 
Dec 13 05:57:17.639210 containerd[1502]: time="2024-12-13T05:57:17.639142327Z" level=info msg="StartContainer for \"5bd1ef1168070b6c46c6e92dabff6448743d24a260a7048b5b90da7fd2e78dec\" returns successfully" Dec 13 05:57:17.788183 kubelet[1906]: E1213 05:57:17.787919 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:18.074879 kubelet[1906]: I1213 05:57:18.074787 1906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-vcpvj" podStartSLOduration=14.853273099 podStartE2EDuration="20.074748093s" podCreationTimestamp="2024-12-13 05:56:58 +0000 UTC" firstStartedPulling="2024-12-13 05:57:12.281040416 +0000 UTC m=+32.992925941" lastFinishedPulling="2024-12-13 05:57:17.502515408 +0000 UTC m=+38.214400935" observedRunningTime="2024-12-13 05:57:18.074431106 +0000 UTC m=+38.786316641" watchObservedRunningTime="2024-12-13 05:57:18.074748093 +0000 UTC m=+38.786633626" Dec 13 05:57:18.788926 kubelet[1906]: E1213 05:57:18.788755 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:19.752721 kubelet[1906]: E1213 05:57:19.752647 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:19.790033 kubelet[1906]: E1213 05:57:19.789957 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:20.790841 kubelet[1906]: E1213 05:57:20.790722 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:21.791517 kubelet[1906]: E1213 05:57:21.791447 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:22.792413 kubelet[1906]: E1213 05:57:22.792309 1906 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:23.793578 kubelet[1906]: E1213 05:57:23.793505 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:24.794174 kubelet[1906]: E1213 05:57:24.794128 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:25.794844 kubelet[1906]: E1213 05:57:25.794780 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:26.796040 kubelet[1906]: E1213 05:57:26.795961 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:27.655317 systemd[1]: Created slice kubepods-besteffort-pode46703b1_89b4_452c_a2c0_325900720791.slice - libcontainer container kubepods-besteffort-pode46703b1_89b4_452c_a2c0_325900720791.slice. 
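The `pod_startup_latency_tracker` entries above (e.g. for `nginx-deployment-8587fbcb89-vcpvj`) report both a `podStartE2EDuration` and a smaller `podStartSLOduration`. Judging from the logged timestamps, the SLO figure appears to be the end-to-end duration minus the time spent pulling images; a sketch reproducing that arithmetic from the values in the log (the subtraction being kubelet's exact formula is an assumption inferred from these numbers):

```python
# Sketch: recompute the nginx pod's startup durations from the timestamps
# logged above. Timestamps are copied from the log, truncated to microseconds.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2024-12-13 05:56:58.000000", fmt)  # podCreationTimestamp
pull_from = datetime.strptime("2024-12-13 05:57:12.281040", fmt)  # firstStartedPulling
pull_to   = datetime.strptime("2024-12-13 05:57:17.502515", fmt)  # lastFinishedPulling
observed  = datetime.strptime("2024-12-13 05:57:18.074748", fmt)  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()  # exclude image-pull time
print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")
```

The result matches the logged values (about 20.074748s end-to-end, 14.853273s SLO duration), consistent with image-pull time being excluded from the SLO figure.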
Dec 13 05:57:27.717120 kubelet[1906]: I1213 05:57:27.717068 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9t6\" (UniqueName: \"kubernetes.io/projected/e46703b1-89b4-452c-a2c0-325900720791-kube-api-access-fc9t6\") pod \"nfs-server-provisioner-0\" (UID: \"e46703b1-89b4-452c-a2c0-325900720791\") " pod="default/nfs-server-provisioner-0" Dec 13 05:57:27.717436 kubelet[1906]: I1213 05:57:27.717330 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e46703b1-89b4-452c-a2c0-325900720791-data\") pod \"nfs-server-provisioner-0\" (UID: \"e46703b1-89b4-452c-a2c0-325900720791\") " pod="default/nfs-server-provisioner-0" Dec 13 05:57:27.796876 kubelet[1906]: E1213 05:57:27.796822 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:27.959766 containerd[1502]: time="2024-12-13T05:57:27.959539091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e46703b1-89b4-452c-a2c0-325900720791,Namespace:default,Attempt:0,}" Dec 13 05:57:28.143259 systemd-networkd[1430]: cali60e51b789ff: Link UP Dec 13 05:57:28.143661 systemd-networkd[1430]: cali60e51b789ff: Gained carrier Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.026 [INFO][3219] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.243.75.98-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e46703b1-89b4-452c-a2c0-325900720791 1167 0 2024-12-13 05:57:27 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.243.75.98 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.026 [INFO][3219] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.086 [INFO][3230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" HandleID="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Workload="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.101 [INFO][3230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" HandleID="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Workload="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050bd0), Attrs:map[string]string{"namespace":"default", "node":"10.243.75.98", "pod":"nfs-server-provisioner-0", 
"timestamp":"2024-12-13 05:57:28.08682268 +0000 UTC"}, Hostname:"10.243.75.98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.101 [INFO][3230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.101 [INFO][3230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.101 [INFO][3230] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.243.75.98' Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.104 [INFO][3230] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.110 [INFO][3230] ipam/ipam.go 372: Looking up existing affinities for host host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.116 [INFO][3230] ipam/ipam.go 489: Trying affinity for 192.168.111.128/26 host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.119 [INFO][3230] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.128/26 host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.122 [INFO][3230] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.128/26 host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.122 [INFO][3230] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.128/26 handle="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.124 [INFO][3230] ipam/ipam.go 1685: Creating 
new handle: k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.129 [INFO][3230] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.128/26 handle="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.136 [INFO][3230] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.131/26] block=192.168.111.128/26 handle="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.136 [INFO][3230] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.131/26] handle="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" host="10.243.75.98" Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.136 [INFO][3230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
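The IPAM sequence above shows Calico confirming the host's affinity for the block `192.168.111.128/26` and then claiming `192.168.111.131` from it. A small stdlib sketch checking that relationship (a sanity check on the logged values, not part of Calico itself):

```python
# Sketch: confirm the IP Calico's IPAM claimed above falls inside the
# host-affine block it loaded, using Python's stdlib ipaddress module.
import ipaddress

block = ipaddress.ip_network("192.168.111.128/26")   # block from the log
claimed = ipaddress.ip_address("192.168.111.131")    # claimed address

print(claimed in block)        # membership check
print(block.num_addresses)     # a /26 holds 64 addresses
```

The same block also covers the earlier assignment of `192.168.111.130` to the nginx pod, which is why both pods land in one host-affine /26.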
Dec 13 05:57:28.158627 containerd[1502]: 2024-12-13 05:57:28.136 [INFO][3230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.131/26] IPv6=[] ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" HandleID="k8s-pod-network.2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Workload="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.163573 containerd[1502]: 2024-12-13 05:57:28.138 [INFO][3219] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e46703b1-89b4-452c-a2c0-325900720791", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 57, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.111.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:28.163573 containerd[1502]: 2024-12-13 05:57:28.138 [INFO][3219] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.131/32] ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.163573 containerd[1502]: 2024-12-13 05:57:28.138 [INFO][3219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.163573 containerd[1502]: 2024-12-13 05:57:28.144 [INFO][3219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.164537 containerd[1502]: 2024-12-13 05:57:28.145 [INFO][3219] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e46703b1-89b4-452c-a2c0-325900720791", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 57, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.111.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"72:6c:b3:a0:5d:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:28.164537 containerd[1502]: 2024-12-13 05:57:28.156 [INFO][3219] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.243.75.98-k8s-nfs--server--provisioner--0-eth0" Dec 13 05:57:28.193515 containerd[1502]: time="2024-12-13T05:57:28.192675360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:57:28.193515 containerd[1502]: time="2024-12-13T05:57:28.193470859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:57:28.193898 containerd[1502]: time="2024-12-13T05:57:28.193497511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:57:28.193898 containerd[1502]: time="2024-12-13T05:57:28.193699310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:57:28.217358 systemd[1]: run-containerd-runc-k8s.io-2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d-runc.zGy8Ov.mount: Deactivated successfully. Dec 13 05:57:28.226292 systemd[1]: Started cri-containerd-2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d.scope - libcontainer container 2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d. Dec 13 05:57:28.284347 containerd[1502]: time="2024-12-13T05:57:28.284233791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e46703b1-89b4-452c-a2c0-325900720791,Namespace:default,Attempt:0,} returns sandbox id \"2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d\"" Dec 13 05:57:28.287584 containerd[1502]: time="2024-12-13T05:57:28.287222930Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 05:57:28.798723 kubelet[1906]: E1213 05:57:28.797790 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:29.262429 systemd-networkd[1430]: cali60e51b789ff: Gained IPv6LL Dec 13 05:57:29.798606 kubelet[1906]: E1213 05:57:29.798474 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:30.799961 kubelet[1906]: E1213 05:57:30.799797 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:31.800930 kubelet[1906]: E1213 05:57:31.800663 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:31.936589 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3569034542.mount: Deactivated successfully. Dec 13 05:57:32.802150 kubelet[1906]: E1213 05:57:32.802021 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:33.802927 kubelet[1906]: E1213 05:57:33.802779 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:34.804006 kubelet[1906]: E1213 05:57:34.803719 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:34.855468 containerd[1502]: time="2024-12-13T05:57:34.855335910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:57:34.856985 containerd[1502]: time="2024-12-13T05:57:34.856251565Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 05:57:34.857673 containerd[1502]: time="2024-12-13T05:57:34.857597153Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:57:34.861696 containerd[1502]: time="2024-12-13T05:57:34.861641755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:57:34.863067 containerd[1502]: time="2024-12-13T05:57:34.863022684Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.575666615s" Dec 13 05:57:34.863174 containerd[1502]: time="2024-12-13T05:57:34.863107624Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 05:57:34.878749 containerd[1502]: time="2024-12-13T05:57:34.878626354Z" level=info msg="CreateContainer within sandbox \"2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 05:57:34.905255 containerd[1502]: time="2024-12-13T05:57:34.905125874Z" level=info msg="CreateContainer within sandbox \"2576caa797566384a8938efe9caeed42550148adb4637e751ac9c171caf71e1d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fb24f66e75f167c219c5008044c453b8fd710cbd5e3f93c669e4667003e9d3b3\"" Dec 13 05:57:34.906510 containerd[1502]: time="2024-12-13T05:57:34.905615012Z" level=info msg="StartContainer for \"fb24f66e75f167c219c5008044c453b8fd710cbd5e3f93c669e4667003e9d3b3\"" Dec 13 05:57:34.955315 systemd[1]: Started cri-containerd-fb24f66e75f167c219c5008044c453b8fd710cbd5e3f93c669e4667003e9d3b3.scope - libcontainer container fb24f66e75f167c219c5008044c453b8fd710cbd5e3f93c669e4667003e9d3b3. 
Dec 13 05:57:34.993222 containerd[1502]: time="2024-12-13T05:57:34.992668015Z" level=info msg="StartContainer for \"fb24f66e75f167c219c5008044c453b8fd710cbd5e3f93c669e4667003e9d3b3\" returns successfully" Dec 13 05:57:35.134633 kubelet[1906]: I1213 05:57:35.134486 1906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.544806634 podStartE2EDuration="8.134458237s" podCreationTimestamp="2024-12-13 05:57:27 +0000 UTC" firstStartedPulling="2024-12-13 05:57:28.286386172 +0000 UTC m=+48.998271700" lastFinishedPulling="2024-12-13 05:57:34.876037775 +0000 UTC m=+55.587923303" observedRunningTime="2024-12-13 05:57:35.134035512 +0000 UTC m=+55.845921051" watchObservedRunningTime="2024-12-13 05:57:35.134458237 +0000 UTC m=+55.846343772" Dec 13 05:57:35.804433 kubelet[1906]: E1213 05:57:35.804364 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:36.804851 kubelet[1906]: E1213 05:57:36.804774 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:37.805691 kubelet[1906]: E1213 05:57:37.805626 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:38.806999 kubelet[1906]: E1213 05:57:38.806841 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:39.753430 kubelet[1906]: E1213 05:57:39.752902 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:39.777554 containerd[1502]: time="2024-12-13T05:57:39.777455217Z" level=info msg="StopPodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\"" Dec 13 05:57:39.807650 kubelet[1906]: E1213 05:57:39.807574 1906 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.834 [WARNING][3427] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-csi--node--driver--cnc5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"434925fb-a29e-456c-8c09-f83da1b81015", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665", Pod:"csi-node-driver-cnc5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee7ce394052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.835 [INFO][3427] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.835 [INFO][3427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" iface="eth0" netns="" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.835 [INFO][3427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.835 [INFO][3427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.864 [INFO][3433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.865 [INFO][3433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.865 [INFO][3433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.875 [WARNING][3433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.875 [INFO][3433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.877 [INFO][3433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:57:39.882132 containerd[1502]: 2024-12-13 05:57:39.878 [INFO][3427] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.885809 containerd[1502]: time="2024-12-13T05:57:39.882137424Z" level=info msg="TearDown network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" successfully" Dec 13 05:57:39.885809 containerd[1502]: time="2024-12-13T05:57:39.882172580Z" level=info msg="StopPodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" returns successfully" Dec 13 05:57:39.888964 containerd[1502]: time="2024-12-13T05:57:39.888604569Z" level=info msg="RemovePodSandbox for \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\"" Dec 13 05:57:39.888964 containerd[1502]: time="2024-12-13T05:57:39.888653776Z" level=info msg="Forcibly stopping sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\"" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.940 [WARNING][3453] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-csi--node--driver--cnc5j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"434925fb-a29e-456c-8c09-f83da1b81015", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665", Pod:"csi-node-driver-cnc5j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee7ce394052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.940 [INFO][3453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.940 [INFO][3453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" iface="eth0" netns="" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.940 [INFO][3453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.940 [INFO][3453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.966 [INFO][3459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.966 [INFO][3459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.966 [INFO][3459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.975 [WARNING][3459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.975 [INFO][3459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" HandleID="k8s-pod-network.5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Workload="10.243.75.98-k8s-csi--node--driver--cnc5j-eth0" Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.978 [INFO][3459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:57:39.980428 containerd[1502]: 2024-12-13 05:57:39.979 [INFO][3453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e" Dec 13 05:57:39.980428 containerd[1502]: time="2024-12-13T05:57:39.980440152Z" level=info msg="TearDown network for sandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" successfully" Dec 13 05:57:40.012938 containerd[1502]: time="2024-12-13T05:57:40.012780658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
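The WARNING repeated through the teardown above comes from the guard at `cni-plugin/k8s.go` 572: on CNI DEL, Calico deletes the WorkloadEndpoint only when the deleting container ID still owns it. Here the DEL names sandbox 5cb7c352… while the WEP records a2de7086…, so only netns cleanup and IP release run. A schematic of that decision (the function and action strings are illustrative, not Calico's API):

```python
# Sketch of the logged decision: a stale sandbox ID must not delete a WEP
# that a newer container already owns; cleanup and IP release always run.
def teardown_actions(cni_container_id, wep_container_id):
    """Return the teardown steps implied by the container-ID comparison."""
    actions = []
    if cni_container_id == wep_container_id:
        actions.append("delete WorkloadEndpoint")
    actions += ["clean up netns", "release IP address(es)"]
    return actions

# IDs from the entries above: the DEL's sandbox vs. the WEP's ContainerID.
print(teardown_actions(
    "5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e",
    "a2de708601836daf0a8c3442cc320b085cc00b77f003a52fee1055a365a62665",
))
```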
Dec 13 05:57:40.012938 containerd[1502]: time="2024-12-13T05:57:40.012911090Z" level=info msg="RemovePodSandbox \"5cb7c3523bd107566204d634023c55c58d8c7c1d13881a27cecc372fb04a3f6e\" returns successfully" Dec 13 05:57:40.014882 containerd[1502]: time="2024-12-13T05:57:40.013821904Z" level=info msg="StopPodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\"" Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.062 [WARNING][3477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"4a7f0a20-2756-46a0-9f4e-263ff75e5d10", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589", Pod:"nginx-deployment-8587fbcb89-vcpvj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.111.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali27eb4987a4c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.062 [INFO][3477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.062 [INFO][3477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" iface="eth0" netns="" Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.062 [INFO][3477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.062 [INFO][3477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.116 [INFO][3483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0" Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.116 [INFO][3483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.117 [INFO][3483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.129 [WARNING][3483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0"
Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.129 [INFO][3483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0"
Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.131 [INFO][3483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 05:57:40.136676 containerd[1502]: 2024-12-13 05:57:40.135 [INFO][3477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:57:40.137805 containerd[1502]: time="2024-12-13T05:57:40.137614691Z" level=info msg="TearDown network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" successfully"
Dec 13 05:57:40.137805 containerd[1502]: time="2024-12-13T05:57:40.137670182Z" level=info msg="StopPodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" returns successfully"
Dec 13 05:57:40.138469 containerd[1502]: time="2024-12-13T05:57:40.138431079Z" level=info msg="RemovePodSandbox for \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\""
Dec 13 05:57:40.138584 containerd[1502]: time="2024-12-13T05:57:40.138473835Z" level=info msg="Forcibly stopping sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\""
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.191 [WARNING][3505] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"4a7f0a20-2756-46a0-9f4e-263ff75e5d10", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 56, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"44a8ef25b809421771ab54364f7d924a1cee50ad99557d151011007fe6778589", Pod:"nginx-deployment-8587fbcb89-vcpvj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.111.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali27eb4987a4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.192 [INFO][3505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.192 [INFO][3505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" iface="eth0" netns=""
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.192 [INFO][3505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.192 [INFO][3505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.220 [INFO][3511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0"
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.220 [INFO][3511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.220 [INFO][3511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.230 [WARNING][3511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0"
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.230 [INFO][3511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" HandleID="k8s-pod-network.d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd" Workload="10.243.75.98-k8s-nginx--deployment--8587fbcb89--vcpvj-eth0"
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.233 [INFO][3511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 05:57:40.236425 containerd[1502]: 2024-12-13 05:57:40.234 [INFO][3505] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd"
Dec 13 05:57:40.236425 containerd[1502]: time="2024-12-13T05:57:40.236308884Z" level=info msg="TearDown network for sandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" successfully"
Dec 13 05:57:40.243235 containerd[1502]: time="2024-12-13T05:57:40.243190487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 05:57:40.243636 containerd[1502]: time="2024-12-13T05:57:40.243257506Z" level=info msg="RemovePodSandbox \"d53ca0a9afc9e14d7c636903df9e3a44648384112c7841899a1f28a6291070bd\" returns successfully"
Dec 13 05:57:40.808591 kubelet[1906]: E1213 05:57:40.808500 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:41.809643 kubelet[1906]: E1213 05:57:41.809568 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:42.810615 kubelet[1906]: E1213 05:57:42.810542 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:43.811528 kubelet[1906]: E1213 05:57:43.811469 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:44.734394 systemd[1]: Created slice kubepods-besteffort-pod384cd633_3247_4d25_a6b1_6da6f5f9b4e6.slice - libcontainer container kubepods-besteffort-pod384cd633_3247_4d25_a6b1_6da6f5f9b4e6.slice.
Dec 13 05:57:44.812398 kubelet[1906]: E1213 05:57:44.812332 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:44.836360 kubelet[1906]: I1213 05:57:44.836293 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-454f7\" (UniqueName: \"kubernetes.io/projected/384cd633-3247-4d25-a6b1-6da6f5f9b4e6-kube-api-access-454f7\") pod \"test-pod-1\" (UID: \"384cd633-3247-4d25-a6b1-6da6f5f9b4e6\") " pod="default/test-pod-1"
Dec 13 05:57:44.836522 kubelet[1906]: I1213 05:57:44.836376 1906 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81863d94-8e3c-49cc-ad2f-637aaec50195\" (UniqueName: \"kubernetes.io/nfs/384cd633-3247-4d25-a6b1-6da6f5f9b4e6-pvc-81863d94-8e3c-49cc-ad2f-637aaec50195\") pod \"test-pod-1\" (UID: \"384cd633-3247-4d25-a6b1-6da6f5f9b4e6\") " pod="default/test-pod-1"
Dec 13 05:57:44.982308 kernel: FS-Cache: Loaded
Dec 13 05:57:45.066680 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 05:57:45.066854 kernel: RPC: Registered udp transport module.
Dec 13 05:57:45.066899 kernel: RPC: Registered tcp transport module.
Dec 13 05:57:45.067582 kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 05:57:45.068633 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 05:57:45.385588 kernel: NFS: Registering the id_resolver key type
Dec 13 05:57:45.385853 kernel: Key type id_resolver registered
Dec 13 05:57:45.385896 kernel: Key type id_legacy registered
Dec 13 05:57:45.434923 nfsidmap[3540]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 05:57:45.443169 nfsidmap[3543]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 05:57:45.640650 containerd[1502]: time="2024-12-13T05:57:45.639616557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:384cd633-3247-4d25-a6b1-6da6f5f9b4e6,Namespace:default,Attempt:0,}"
Dec 13 05:57:45.796189 systemd-networkd[1430]: cali5ec59c6bf6e: Link UP
Dec 13 05:57:45.798459 systemd-networkd[1430]: cali5ec59c6bf6e: Gained carrier
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.701 [INFO][3549] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.243.75.98-k8s-test--pod--1-eth0 default 384cd633-3247-4d25-a6b1-6da6f5f9b4e6 1238 0 2024-12-13 05:57:29 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.243.75.98 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.701 [INFO][3549] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.736 [INFO][3560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" HandleID="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Workload="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.751 [INFO][3560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" HandleID="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Workload="10.243.75.98-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4d60), Attrs:map[string]string{"namespace":"default", "node":"10.243.75.98", "pod":"test-pod-1", "timestamp":"2024-12-13 05:57:45.736126811 +0000 UTC"}, Hostname:"10.243.75.98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.751 [INFO][3560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.751 [INFO][3560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.751 [INFO][3560] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.243.75.98'
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.755 [INFO][3560] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.761 [INFO][3560] ipam/ipam.go 372: Looking up existing affinities for host host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.768 [INFO][3560] ipam/ipam.go 489: Trying affinity for 192.168.111.128/26 host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.770 [INFO][3560] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.128/26 host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.774 [INFO][3560] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.128/26 host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.774 [INFO][3560] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.128/26 handle="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.777 [INFO][3560] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.782 [INFO][3560] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.128/26 handle="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.790 [INFO][3560] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.132/26] block=192.168.111.128/26 handle="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.790 [INFO][3560] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.132/26] handle="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" host="10.243.75.98"
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.790 [INFO][3560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 05:57:45.810309 containerd[1502]: 2024-12-13 05:57:45.790 [INFO][3560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.132/26] IPv6=[] ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" HandleID="k8s-pod-network.d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Workload="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.812657 containerd[1502]: 2024-12-13 05:57:45.792 [INFO][3549] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"384cd633-3247-4d25-a6b1-6da6f5f9b4e6", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 57, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.111.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 05:57:45.812657 containerd[1502]: 2024-12-13 05:57:45.792 [INFO][3549] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.132/32] ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.812657 containerd[1502]: 2024-12-13 05:57:45.792 [INFO][3549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.812657 containerd[1502]: 2024-12-13 05:57:45.798 [INFO][3549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.812657 containerd[1502]: 2024-12-13 05:57:45.799 [INFO][3549] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.243.75.98-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"384cd633-3247-4d25-a6b1-6da6f5f9b4e6", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 57, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.243.75.98", ContainerID:"d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.111.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:0c:68:2a:e3:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 05:57:45.812657 containerd[1502]: 2024-12-13 05:57:45.808 [INFO][3549] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.243.75.98-k8s-test--pod--1-eth0"
Dec 13 05:57:45.816138 kubelet[1906]: E1213 05:57:45.814201 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:45.851148 containerd[1502]: time="2024-12-13T05:57:45.849570173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:57:45.851148 containerd[1502]: time="2024-12-13T05:57:45.849752003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:57:45.851148 containerd[1502]: time="2024-12-13T05:57:45.849796060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:57:45.851148 containerd[1502]: time="2024-12-13T05:57:45.850072409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:57:45.882293 systemd[1]: Started cri-containerd-d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672.scope - libcontainer container d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672.
Dec 13 05:57:45.939626 containerd[1502]: time="2024-12-13T05:57:45.939560137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:384cd633-3247-4d25-a6b1-6da6f5f9b4e6,Namespace:default,Attempt:0,} returns sandbox id \"d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672\""
Dec 13 05:57:45.943209 containerd[1502]: time="2024-12-13T05:57:45.942861758Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 05:57:46.279621 containerd[1502]: time="2024-12-13T05:57:46.279478222Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:57:46.281437 containerd[1502]: time="2024-12-13T05:57:46.281332076Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 13 05:57:46.293029 containerd[1502]: time="2024-12-13T05:57:46.292968768Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 350.055216ms"
Dec 13 05:57:46.293029 containerd[1502]: time="2024-12-13T05:57:46.293027064Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 05:57:46.296057 containerd[1502]: time="2024-12-13T05:57:46.295982984Z" level=info msg="CreateContainer within sandbox \"d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 05:57:46.311296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742529308.mount: Deactivated successfully.
Dec 13 05:57:46.315289 containerd[1502]: time="2024-12-13T05:57:46.315234983Z" level=info msg="CreateContainer within sandbox \"d8d52dee98956faa906961bd877c400b1a3f21007b19ce85c00bec446f9fc672\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"037be7ce945a72867160e3affa3595593307aeb697b7e6aca51f59fd85766ec1\""
Dec 13 05:57:46.316140 containerd[1502]: time="2024-12-13T05:57:46.316039318Z" level=info msg="StartContainer for \"037be7ce945a72867160e3affa3595593307aeb697b7e6aca51f59fd85766ec1\""
Dec 13 05:57:46.362348 systemd[1]: Started cri-containerd-037be7ce945a72867160e3affa3595593307aeb697b7e6aca51f59fd85766ec1.scope - libcontainer container 037be7ce945a72867160e3affa3595593307aeb697b7e6aca51f59fd85766ec1.
Dec 13 05:57:46.396266 containerd[1502]: time="2024-12-13T05:57:46.395830061Z" level=info msg="StartContainer for \"037be7ce945a72867160e3affa3595593307aeb697b7e6aca51f59fd85766ec1\" returns successfully"
Dec 13 05:57:46.814786 kubelet[1906]: E1213 05:57:46.814699 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:47.815722 kubelet[1906]: E1213 05:57:47.815658 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:47.821468 systemd-networkd[1430]: cali5ec59c6bf6e: Gained IPv6LL
Dec 13 05:57:48.815960 kubelet[1906]: E1213 05:57:48.815878 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:49.816463 kubelet[1906]: E1213 05:57:49.816395 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:50.817616 kubelet[1906]: E1213 05:57:50.817548 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:51.817838 kubelet[1906]: E1213 05:57:51.817732 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:52.818140 kubelet[1906]: E1213 05:57:52.817984 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:53.818625 kubelet[1906]: E1213 05:57:53.818545 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:54.819065 kubelet[1906]: E1213 05:57:54.819010 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:55.819798 kubelet[1906]: E1213 05:57:55.819731 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 05:57:56.821127 kubelet[1906]: E1213 05:57:56.820984 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"