Jan 14 13:08:21.186062 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 14 13:08:21.186693 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:08:21.186709 kernel: BIOS-provided physical RAM map:
Jan 14 13:08:21.186720 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:08:21.186730 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:08:21.186740 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:08:21.186754 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:08:21.186765 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:08:21.186780 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:08:21.186791 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:08:21.186802 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:08:21.186813 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:08:21.186824 kernel: NX (Execute Disable) protection: active
Jan 14 13:08:21.186836 kernel: APIC: Static calls initialized
Jan 14 13:08:21.186853 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:08:21.186866 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 
Jan 14 13:08:21.186879 kernel: random: crng init done
Jan 14 13:08:21.186891 kernel: secureboot: Secure boot disabled
Jan 14 13:08:21.186904 kernel: SMBIOS 3.1.0 present.
Jan 14 13:08:21.186916 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:08:21.186929 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:08:21.186941 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:08:21.186953 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:08:21.186966 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:08:21.186994 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:08:21.187006 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:08:21.187018 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:08:21.187029 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:08:21.187041 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:08:21.187054 kernel: tsc: Detected 2593.904 MHz processor
Jan 14 13:08:21.187066 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:08:21.187079 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:08:21.187091 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:08:21.187107 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:08:21.187120 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 14 13:08:21.187132 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:08:21.187144 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:08:21.187156 kernel: Using GB pages for direct mapping
Jan 14 13:08:21.187168 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:08:21.187181 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:08:21.187198 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187212 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187226 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
Jan 14 13:08:21.187238 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:08:21.187251 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187264 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187276 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187292 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187305 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187318 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187331 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187344 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:08:21.187356 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:08:21.187369 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:08:21.187382 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:08:21.187395 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:08:21.187411 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:08:21.187424 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:08:21.187437 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:08:21.187449 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:08:21.187463 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:08:21.187475 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:08:21.187488 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:08:21.187501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:08:21.187514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:08:21.187529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:08:21.187542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:08:21.187555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:08:21.187567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:08:21.187580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:08:21.187593 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:08:21.187605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:08:21.187619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:08:21.187634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:08:21.187647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:08:21.187660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:08:21.187672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:08:21.187685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:08:21.187698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:08:21.187711 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:08:21.187723 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:08:21.187736 kernel: Zone ranges:
Jan 14 13:08:21.187751 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:08:21.187764 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:08:21.187777 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:08:21.187790 kernel: Movable zone start for each node
Jan 14 13:08:21.187802 kernel: Early memory node ranges
Jan 14 13:08:21.187815 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:08:21.187828 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:08:21.187840 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:08:21.187853 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:08:21.187869 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:08:21.187881 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:08:21.187894 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:08:21.187907 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:08:21.187919 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:08:21.187930 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:08:21.187942 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:08:21.187954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:08:21.187967 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:08:21.187995 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:08:21.188007 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:08:21.188019 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:08:21.188046 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:08:21.188059 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:08:21.188071 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:08:21.188082 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:08:21.188095 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:08:21.188107 kernel: pcpu-alloc: [0] 0 1 
Jan 14 13:08:21.188123 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:08:21.188140 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:08:21.188154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:08:21.188168 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:08:21.188180 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:08:21.188193 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:08:21.188206 kernel: Fallback order for Node 0: 0 
Jan 14 13:08:21.188220 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062618
Jan 14 13:08:21.188238 kernel: Policy zone: Normal
Jan 14 13:08:21.188262 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:08:21.188277 kernel: software IO TLB: area num 2.
Jan 14 13:08:21.188294 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved)
Jan 14 13:08:21.188308 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:08:21.188323 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 14 13:08:21.188337 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:08:21.188351 kernel: Dynamic Preempt: voluntary
Jan 14 13:08:21.188365 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:08:21.188380 kernel: rcu:         RCU event tracing is enabled.
Jan 14 13:08:21.188394 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:08:21.188412 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 14 13:08:21.188426 kernel:         Rude variant of Tasks RCU enabled.
Jan 14 13:08:21.188440 kernel:         Tracing variant of Tasks RCU enabled.
Jan 14 13:08:21.188454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:08:21.188468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:08:21.188482 kernel: Using NULL legacy PIC
Jan 14 13:08:21.188498 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:08:21.188512 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:08:21.188530 kernel: Console: colour dummy device 80x25
Jan 14 13:08:21.188543 kernel: printk: console [tty1] enabled
Jan 14 13:08:21.188557 kernel: printk: console [ttyS0] enabled
Jan 14 13:08:21.188571 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:08:21.188584 kernel: ACPI: Core revision 20230628
Jan 14 13:08:21.188598 kernel: Failed to register legacy timer interrupt
Jan 14 13:08:21.188611 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:08:21.188628 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:08:21.188641 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:08:21.188655 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:08:21.188668 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:08:21.188682 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:08:21.188696 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:08:21.188709 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:08:21.188723 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:08:21.188736 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 14 13:08:21.188753 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:08:21.188767 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:08:21.188780 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:08:21.188794 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:08:21.188807 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:08:21.188820 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:08:21.188834 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:08:21.188847 kernel: RETBleed: Vulnerable
Jan 14 13:08:21.188860 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:08:21.188873 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:08:21.188889 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:08:21.188902 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:08:21.188915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:08:21.188929 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:08:21.188942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:08:21.188955 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:08:21.188969 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:08:21.189079 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:08:21.189093 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 14 13:08:21.189106 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Jan 14 13:08:21.189119 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Jan 14 13:08:21.189136 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:08:21.189149 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:08:21.189163 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:08:21.189175 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:08:21.189188 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:08:21.189209 kernel: landlock: Up and running.
Jan 14 13:08:21.189222 kernel: SELinux:  Initializing.
Jan 14 13:08:21.189235 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.189249 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.189266 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:08:21.189280 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:08:21.189298 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:08:21.189313 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:08:21.189328 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:08:21.189343 kernel: signal: max sigframe size: 3632
Jan 14 13:08:21.189357 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:08:21.189372 kernel: rcu:         Max phase no-delay instances is 400.
Jan 14 13:08:21.189386 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:08:21.189401 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:08:21.189415 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:08:21.189432 kernel: .... node  #0, CPUs:      #1
Jan 14 13:08:21.189447 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:08:21.189463 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:08:21.189477 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:08:21.189491 kernel: smpboot: Max logical packages: 1
Jan 14 13:08:21.189506 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 14 13:08:21.189520 kernel: devtmpfs: initialized
Jan 14 13:08:21.189534 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:08:21.189548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:08:21.189566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:08:21.189580 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:08:21.189595 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:08:21.189609 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:08:21.189623 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:08:21.189638 kernel: audit: type=2000 audit(1736860099.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:08:21.189652 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:08:21.189667 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:08:21.189683 kernel: cpuidle: using governor menu
Jan 14 13:08:21.189698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:08:21.189712 kernel: dca service started, version 1.12.1
Jan 14 13:08:21.189726 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:08:21.189741 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:08:21.189755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:08:21.189770 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:08:21.189784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:08:21.189798 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:08:21.189815 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:08:21.189829 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:08:21.189844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:08:21.189858 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:08:21.189872 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:08:21.189886 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:08:21.189900 kernel: ACPI: Interpreter enabled
Jan 14 13:08:21.189915 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:08:21.189929 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:08:21.189946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:08:21.189961 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:08:21.189989 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:08:21.190008 kernel: iommu: Default domain type: Translated
Jan 14 13:08:21.190020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:08:21.190031 kernel: efivars: Registered efivars operations
Jan 14 13:08:21.190043 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:08:21.190055 kernel: PCI: System does not support PCI
Jan 14 13:08:21.190067 kernel: vgaarb: loaded
Jan 14 13:08:21.190079 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:08:21.190094 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:08:21.190103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:08:21.190111 kernel: pnp: PnP ACPI init
Jan 14 13:08:21.190119 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:08:21.190127 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:08:21.190135 kernel: NET: Registered PF_INET protocol family
Jan 14 13:08:21.193093 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:08:21.193109 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:08:21.193127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:08:21.193139 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:08:21.193152 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:08:21.193164 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:08:21.193177 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.193189 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.193201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:08:21.193214 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:08:21.193228 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:08:21.193246 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:08:21.193261 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jan 14 13:08:21.193275 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:08:21.193290 kernel: Initialise system trusted keyrings
Jan 14 13:08:21.193304 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:08:21.193318 kernel: Key type asymmetric registered
Jan 14 13:08:21.193332 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:08:21.193346 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:08:21.193361 kernel: io scheduler mq-deadline registered
Jan 14 13:08:21.193378 kernel: io scheduler kyber registered
Jan 14 13:08:21.193392 kernel: io scheduler bfq registered
Jan 14 13:08:21.193406 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:08:21.193420 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:08:21.193434 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:08:21.193449 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:08:21.193463 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:08:21.193662 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:08:21.193792 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:08:20 UTC (1736860100)
Jan 14 13:08:21.193909 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:08:21.193928 kernel: intel_pstate: CPU model not supported
Jan 14 13:08:21.193942 kernel: efifb: probing for efifb
Jan 14 13:08:21.193957 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:08:21.193971 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:08:21.194010 kernel: efifb: scrolling: redraw
Jan 14 13:08:21.194025 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:08:21.194040 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:08:21.194059 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:08:21.194072 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:08:21.194087 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:08:21.194101 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:08:21.194115 kernel: Segment Routing with IPv6
Jan 14 13:08:21.194130 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:08:21.194144 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:08:21.194159 kernel: Key type dns_resolver registered
Jan 14 13:08:21.194173 kernel: IPI shorthand broadcast: enabled
Jan 14 13:08:21.194191 kernel: sched_clock: Marking stable (997172000, 56560500)->(1323869200, -270136700)
Jan 14 13:08:21.194206 kernel: registered taskstats version 1
Jan 14 13:08:21.194220 kernel: Loading compiled-in X.509 certificates
Jan 14 13:08:21.194235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 14 13:08:21.194249 kernel: Key type .fscrypt registered
Jan 14 13:08:21.194263 kernel: Key type fscrypt-provisioning registered
Jan 14 13:08:21.194278 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:08:21.194292 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:08:21.194307 kernel: ima: No architecture policies found
Jan 14 13:08:21.194324 kernel: clk: Disabling unused clocks
Jan 14 13:08:21.194338 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 14 13:08:21.194353 kernel: Write protecting the kernel read-only data: 38912k
Jan 14 13:08:21.194367 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 14 13:08:21.194381 kernel: Run /init as init process
Jan 14 13:08:21.194396 kernel:   with arguments:
Jan 14 13:08:21.194410 kernel:     /init
Jan 14 13:08:21.194424 kernel:   with environment:
Jan 14 13:08:21.194437 kernel:     HOME=/
Jan 14 13:08:21.194454 kernel:     TERM=linux
Jan 14 13:08:21.194468 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:08:21.194487 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:08:21.194505 systemd[1]: Detected virtualization microsoft.
Jan 14 13:08:21.194520 systemd[1]: Detected architecture x86-64.
Jan 14 13:08:21.194535 systemd[1]: Running in initrd.
Jan 14 13:08:21.194549 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:08:21.194564 systemd[1]: Hostname set to <localhost>.
Jan 14 13:08:21.194582 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:08:21.194597 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:08:21.194612 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:08:21.194627 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:08:21.194644 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:08:21.194659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:08:21.194674 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:08:21.194693 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:08:21.194711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:08:21.194726 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:08:21.194741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:08:21.194757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:08:21.194772 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:08:21.194787 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:08:21.194806 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:08:21.194821 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:08:21.194836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:08:21.194851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:08:21.194866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:08:21.194881 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:08:21.194896 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:08:21.194911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:08:21.194926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:08:21.194944 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:08:21.194959 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:08:21.194988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:08:21.195005 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:08:21.195020 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:08:21.195035 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:08:21.195050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:08:21.195095 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:08:21.195134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:21.195150 systemd-journald[177]: Journal started
Jan 14 13:08:21.195190 systemd-journald[177]: Runtime Journal (/run/log/journal/43ec4e1ec3594933bd50acbfa026075d) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:08:21.186498 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:08:21.208647 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:08:21.214020 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:08:21.214527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:08:21.215358 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:08:21.232359 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:08:21.244813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:08:21.250018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:08:21.253622 kernel: Bridge firewalling registered
Jan 14 13:08:21.253757 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:08:21.260173 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:08:21.263774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:21.269361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:08:21.283895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:08:21.293322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:08:21.299153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:08:21.315219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:08:21.322669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:21.340197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:08:21.343370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:08:21.349736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:08:21.362396 dracut-cmdline[211]: dracut-dracut-053
Jan 14 13:08:21.362396 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:08:21.381464 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:08:21.422687 systemd-resolved[226]: Positive Trust Anchors:
Jan 14 13:08:21.425530 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:08:21.429699 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:08:21.451585 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 14 13:08:21.456308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:08:21.464731 kernel: SCSI subsystem initialized
Jan 14 13:08:21.464931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:08:21.476998 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:08:21.488005 kernel: iscsi: registered transport (tcp)
Jan 14 13:08:21.509228 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:08:21.509322 kernel: QLogic iSCSI HBA Driver
Jan 14 13:08:21.546270 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:08:21.555183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:08:21.587353 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:08:21.587470 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:08:21.591761 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:08:21.642012 kernel: raid6: avx512x4 gen() 13048 MB/s
Jan 14 13:08:21.660998 kernel: raid6: avx512x2 gen() 17925 MB/s
Jan 14 13:08:21.679990 kernel: raid6: avx512x1 gen() 17907 MB/s
Jan 14 13:08:21.699998 kernel: raid6: avx2x4   gen() 17994 MB/s
Jan 14 13:08:21.718988 kernel: raid6: avx2x2   gen() 17901 MB/s
Jan 14 13:08:21.739232 kernel: raid6: avx2x1   gen() 13816 MB/s
Jan 14 13:08:21.739315 kernel: raid6: using algorithm avx2x4 gen() 17994 MB/s
Jan 14 13:08:21.760951 kernel: raid6: .... xor() 6821 MB/s, rmw enabled
Jan 14 13:08:21.761031 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:08:21.783003 kernel: xor: automatically using best checksumming function   avx       
Jan 14 13:08:21.924001 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:08:21.934035 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:08:21.945193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:08:21.959319 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 14 13:08:21.963753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:08:21.979722 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:08:21.997745 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 14 13:08:22.029078 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:08:22.036277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:08:22.080504 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:08:22.093180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:08:22.113836 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:08:22.123245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:08:22.127438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:08:22.131460 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:08:22.148650 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:08:22.175130 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:08:22.192013 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:08:22.205055 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:08:22.223012 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:08:22.234794 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:08:22.234870 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 14 13:08:22.242002 kernel: PTP clock support registered
Jan 14 13:08:22.251445 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:08:22.251517 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:08:22.253469 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:08:22.258621 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:08:22.258703 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:08:22.256532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:08:22.942806 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 13:08:22.256762 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:22.923019 systemd-resolved[226]: Clock change detected. Flushing caches.
Jan 14 13:08:22.929085 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:08:22.932510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:22.932850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:22.936130 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:22.967312 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:08:22.972370 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:08:22.971811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:22.977853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:22.978038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:22.997945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:23.002971 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:08:23.007388 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:08:23.014599 kernel: scsi host0: storvsc_host_t
Jan 14 13:08:23.014689 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:08:23.017570 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Jan 14 13:08:23.023384 kernel: scsi host1: storvsc_host_t
Jan 14 13:08:23.023459 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Jan 14 13:08:23.048820 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:08:23.055352 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 13:08:23.056722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:23.070458 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Jan 14 13:08:23.076537 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:08:23.094366 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:08:23.097135 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:08:23.097161 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:08:23.112962 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:08:23.133928 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:08:23.134125 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:08:23.134451 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:08:23.134646 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:08:23.134818 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:08:23.134840 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:08:23.114017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:23.188392 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: VF slot 1 added
Jan 14 13:08:23.201489 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:08:23.201869 kernel: hv_pci f63442c2-e231-4b6b-af8f-fb0a7de4d113: PCI VMBus probing: Using version 0x10004
Jan 14 13:08:23.258262 kernel: hv_pci f63442c2-e231-4b6b-af8f-fb0a7de4d113: PCI host bridge to bus e231:00
Jan 14 13:08:23.258537 kernel: pci_bus e231:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:08:23.258779 kernel: pci_bus e231:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:08:23.258988 kernel: pci e231:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:08:23.259240 kernel: pci e231:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:08:23.259505 kernel: pci e231:00:02.0: enabling Extended Tags
Jan 14 13:08:23.259724 kernel: pci e231:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e231:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:08:23.259925 kernel: pci_bus e231:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:08:23.260070 kernel: pci e231:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:08:23.429348 kernel: mlx5_core e231:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:08:23.685475 kernel: mlx5_core e231:00:02.0: firmware version: 14.30.5000
Jan 14 13:08:23.685720 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (471)
Jan 14 13:08:23.685743 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (453)
Jan 14 13:08:23.685762 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: VF registering: eth1
Jan 14 13:08:23.686391 kernel: mlx5_core e231:00:02.0 eth1: joined to eth0
Jan 14 13:08:23.686613 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:08:23.686636 kernel: mlx5_core e231:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:08:23.548782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:08:23.605309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:08:23.632580 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:08:23.636482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:08:23.646398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:08:23.658500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:08:23.721321 kernel: mlx5_core e231:00:02.0 enP57905s1: renamed from eth1
Jan 14 13:08:24.692374 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:08:24.694436 disk-uuid[608]: The operation has completed successfully.
Jan 14 13:08:24.777717 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:08:24.777858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:08:24.798524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:08:24.805347 sh[695]: Success
Jan 14 13:08:24.839376 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:08:25.029575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:08:25.049761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:08:25.057423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:08:25.077310 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 14 13:08:25.077409 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:25.082157 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:08:25.085425 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:08:25.088359 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:08:25.329956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:08:25.336711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:08:25.349497 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:08:25.360509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:08:25.441222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:08:25.460574 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:25.460602 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:25.460614 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:08:25.461577 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:08:25.475309 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:08:25.486087 systemd-networkd[856]: lo: Link UP
Jan 14 13:08:25.486106 systemd-networkd[856]: lo: Gained carrier
Jan 14 13:08:25.488464 systemd-networkd[856]: Enumeration completed
Jan 14 13:08:25.503416 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:25.488826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:08:25.490825 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:25.490830 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:08:25.491954 systemd[1]: Reached target network.target - Network.
Jan 14 13:08:25.493914 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:08:25.525889 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:08:25.546566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:08:25.576328 kernel: mlx5_core e231:00:02.0 enP57905s1: Link up
Jan 14 13:08:25.612410 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: Data path switched to VF: enP57905s1
Jan 14 13:08:25.612671 systemd-networkd[856]: enP57905s1: Link UP
Jan 14 13:08:25.612804 systemd-networkd[856]: eth0: Link UP
Jan 14 13:08:25.613057 systemd-networkd[856]: eth0: Gained carrier
Jan 14 13:08:25.613067 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:25.630689 systemd-networkd[856]: enP57905s1: Gained carrier
Jan 14 13:08:25.673387 systemd-networkd[856]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:08:26.516859 ignition[880]: Ignition 2.20.0
Jan 14 13:08:26.516871 ignition[880]: Stage: fetch-offline
Jan 14 13:08:26.516915 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.516925 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.517098 ignition[880]: parsed url from cmdline: ""
Jan 14 13:08:26.517104 ignition[880]: no config URL provided
Jan 14 13:08:26.517111 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.517124 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.517133 ignition[880]: failed to fetch config: resource requires networking
Jan 14 13:08:26.526541 ignition[880]: Ignition finished successfully
Jan 14 13:08:26.546262 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:08:26.557518 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:08:26.573014 ignition[889]: Ignition 2.20.0
Jan 14 13:08:26.573028 ignition[889]: Stage: fetch
Jan 14 13:08:26.573260 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.573274 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.573433 ignition[889]: parsed url from cmdline: ""
Jan 14 13:08:26.573437 ignition[889]: no config URL provided
Jan 14 13:08:26.573442 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.573450 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.575041 ignition[889]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:08:26.656076 ignition[889]: GET result: OK
Jan 14 13:08:26.656149 ignition[889]: config has been read from IMDS userdata
Jan 14 13:08:26.656168 ignition[889]: parsing config with SHA512: 1c20d66efd75d25becb83fae6c95083142edcee757f6c0661090c2c64790d1628e61110a6aececf15c947e6cf946d85022216dc77bb0d9499e21f5ffb9706fce
Jan 14 13:08:26.661975 unknown[889]: fetched base config from "system"
Jan 14 13:08:26.661988 unknown[889]: fetched base config from "system"
Jan 14 13:08:26.662352 ignition[889]: fetch: fetch complete
Jan 14 13:08:26.661995 unknown[889]: fetched user config from "azure"
Jan 14 13:08:26.662357 ignition[889]: fetch: fetch passed
Jan 14 13:08:26.669730 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:08:26.662399 ignition[889]: Ignition finished successfully
Jan 14 13:08:26.686525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:08:26.701244 ignition[895]: Ignition 2.20.0
Jan 14 13:08:26.701257 ignition[895]: Stage: kargs
Jan 14 13:08:26.701488 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.701503 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.712070 ignition[895]: kargs: kargs passed
Jan 14 13:08:26.712153 ignition[895]: Ignition finished successfully
Jan 14 13:08:26.717551 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:08:26.728480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:08:26.741445 ignition[901]: Ignition 2.20.0
Jan 14 13:08:26.741457 ignition[901]: Stage: disks
Jan 14 13:08:26.745571 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:08:26.741672 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.749551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:08:26.741685 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.755425 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:08:26.742365 ignition[901]: disks: disks passed
Jan 14 13:08:26.759033 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:08:26.742409 ignition[901]: Ignition finished successfully
Jan 14 13:08:26.779330 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:08:26.779363 systemd-networkd[856]: enP57905s1: Gained IPv6LL
Jan 14 13:08:26.782319 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:08:26.797460 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:08:26.842020 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:08:26.848111 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:08:26.859550 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:08:26.951310 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 14 13:08:26.951882 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:08:26.954804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:08:26.969397 systemd-networkd[856]: eth0: Gained IPv6LL
Jan 14 13:08:26.991396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:08:26.998484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:08:27.008129 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (920)
Jan 14 13:08:27.009000 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:08:27.015705 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:27.024475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:27.024557 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:08:27.024241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:08:27.033574 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:08:27.024313 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:08:27.042060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:08:27.044945 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:08:27.059485 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:08:27.552083 coreos-metadata[922]: Jan 14 13:08:27.552 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:08:27.557041 coreos-metadata[922]: Jan 14 13:08:27.555 INFO Fetch successful
Jan 14 13:08:27.557041 coreos-metadata[922]: Jan 14 13:08:27.555 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:08:27.565744 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:08:27.571332 coreos-metadata[922]: Jan 14 13:08:27.570 INFO Fetch successful
Jan 14 13:08:27.575397 coreos-metadata[922]: Jan 14 13:08:27.571 INFO wrote hostname ci-4186.1.0-a-6f4e4149be to /sysroot/etc/hostname
Jan 14 13:08:27.580945 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:08:27.597164 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:08:27.604272 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:08:27.621919 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:08:28.256039 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:08:28.267425 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:08:28.276458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:08:28.282784 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:28.287597 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:08:28.315111 ignition[1038]: INFO     : Ignition 2.20.0
Jan 14 13:08:28.315111 ignition[1038]: INFO     : Stage: mount
Jan 14 13:08:28.320336 ignition[1038]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:28.320336 ignition[1038]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:28.320336 ignition[1038]: INFO     : mount: mount passed
Jan 14 13:08:28.320336 ignition[1038]: INFO     : Ignition finished successfully
Jan 14 13:08:28.324456 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:08:28.335040 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:08:28.352503 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:08:28.361567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:08:28.387318 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050)
Jan 14 13:08:28.387378 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:28.391308 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:28.397178 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:08:28.404316 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:08:28.404486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:08:28.427550 ignition[1067]: INFO     : Ignition 2.20.0
Jan 14 13:08:28.427550 ignition[1067]: INFO     : Stage: files
Jan 14 13:08:28.432805 ignition[1067]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:28.432805 ignition[1067]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:28.432805 ignition[1067]: DEBUG    : files: compiled without relabeling support, skipping
Jan 14 13:08:28.442281 ignition[1067]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 14 13:08:28.442281 ignition[1067]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:08:28.494899 ignition[1067]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:08:28.499269 ignition[1067]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 14 13:08:28.499269 ignition[1067]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:08:28.495561 unknown[1067]: wrote ssh authorized keys file for user: core
Jan 14 13:08:28.522838 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 14 13:08:28.963050 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 14 13:08:29.439566 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:29.446048 ignition[1067]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:08:29.451051 ignition[1067]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:08:29.451051 ignition[1067]: INFO     : files: files passed
Jan 14 13:08:29.451051 ignition[1067]: INFO     : Ignition finished successfully
Jan 14 13:08:29.457187 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:08:29.469635 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:08:29.476723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:08:29.480278 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:08:29.482431 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:08:29.498139 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:08:29.498139 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:08:29.507573 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:08:29.503254 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:08:29.511493 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:08:29.529551 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:08:29.562637 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:08:29.562753 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:08:29.566085 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:08:29.566176 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:08:29.570563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:08:29.573843 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:08:29.599417 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:08:29.611467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:08:29.624261 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:08:29.632089 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:08:29.638847 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:08:29.641463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:08:29.641605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:08:29.653410 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:08:29.656689 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:08:29.664410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:08:29.670594 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:08:29.677138 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:08:29.683904 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:08:29.689813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:08:29.693192 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:08:29.702035 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:08:29.707676 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:08:29.712669 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:08:29.712836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:08:29.719264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:08:29.724550 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:08:29.734066 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:08:29.736604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:08:29.740522 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:08:29.740667 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:08:29.752896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:08:29.753098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:08:29.763815 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:08:29.764011 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:08:29.769693 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:08:29.769851 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:08:29.791567 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:08:29.796959 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:08:29.799734 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:08:29.808371 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:08:29.811219 ignition[1119]: INFO     : Ignition 2.20.0
Jan 14 13:08:29.811219 ignition[1119]: INFO     : Stage: umount
Jan 14 13:08:29.811219 ignition[1119]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:29.811219 ignition[1119]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:29.811219 ignition[1119]: INFO     : umount: umount passed
Jan 14 13:08:29.811219 ignition[1119]: INFO     : Ignition finished successfully
Jan 14 13:08:29.818061 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:08:29.818401 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:08:29.821936 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:08:29.822052 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:08:29.851676 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:08:29.851796 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:08:29.861250 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:08:29.861650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:08:29.867425 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:08:29.870124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:08:29.878117 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:08:29.878194 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:08:29.881264 systemd[1]: Stopped target network.target - Network.
Jan 14 13:08:29.883711 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:08:29.883783 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:08:29.889968 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:08:29.890520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:08:29.897702 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:08:29.904115 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:08:29.909370 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:08:29.911900 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:08:29.911963 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:08:29.917064 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:08:29.917124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:08:29.919958 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:08:29.920016 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:08:29.930007 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:08:29.930096 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:08:29.941189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:08:29.941995 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:08:29.950570 systemd-networkd[856]: eth0: DHCPv6 lease lost
Jan 14 13:08:29.952049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:08:29.953060 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:08:29.953159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:08:29.957684 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:08:29.957772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:08:29.966918 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:08:29.966961 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:08:29.985517 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:08:29.993803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:08:29.993904 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:08:30.002632 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:08:30.006667 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:08:30.006783 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:08:30.031431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:08:30.031607 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:08:30.062788 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:08:30.062880 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:08:30.069118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:08:30.069172 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:08:30.080516 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:08:30.080607 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:08:30.099541 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:08:30.099632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:08:30.113101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:08:30.120004 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: Data path switched from VF: enP57905s1
Jan 14 13:08:30.113199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:30.127512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:08:30.130601 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:08:30.130686 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:08:30.134498 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:08:30.134577 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:08:30.139900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:08:30.139975 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:08:30.152524 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:08:30.152587 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:08:30.152905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:30.152940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:30.153790 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:08:30.153894 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:08:30.155021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:08:30.155096 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:08:31.293027 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:08:31.293161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:08:31.297295 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:08:31.300778 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:08:31.301025 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:08:31.317604 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:08:31.329677 systemd[1]: Switching root.
Jan 14 13:08:31.545666 systemd-journald[177]: Journal stopped
Jan 14 13:08:21.186062 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 14 13:08:21.186693 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:08:21.186709 kernel: BIOS-provided physical RAM map:
Jan 14 13:08:21.186720 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:08:21.186730 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:08:21.186740 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:08:21.186754 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:08:21.186765 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:08:21.186780 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:08:21.186791 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:08:21.186802 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:08:21.186813 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:08:21.186824 kernel: NX (Execute Disable) protection: active
Jan 14 13:08:21.186836 kernel: APIC: Static calls initialized
Jan 14 13:08:21.186853 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:08:21.186866 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 
Jan 14 13:08:21.186879 kernel: random: crng init done
Jan 14 13:08:21.186891 kernel: secureboot: Secure boot disabled
Jan 14 13:08:21.186904 kernel: SMBIOS 3.1.0 present.
Jan 14 13:08:21.186916 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:08:21.186929 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:08:21.186941 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:08:21.186953 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:08:21.186966 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:08:21.186994 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:08:21.187006 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:08:21.187018 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:08:21.187029 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:08:21.187041 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:08:21.187054 kernel: tsc: Detected 2593.904 MHz processor
Jan 14 13:08:21.187066 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:08:21.187079 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:08:21.187091 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:08:21.187107 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:08:21.187120 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 14 13:08:21.187132 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:08:21.187144 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:08:21.187156 kernel: Using GB pages for direct mapping
Jan 14 13:08:21.187168 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:08:21.187181 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:08:21.187198 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187212 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187226 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
Jan 14 13:08:21.187238 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:08:21.187251 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187264 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187276 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187292 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187305 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187318 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187331 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:08:21.187344 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:08:21.187356 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:08:21.187369 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:08:21.187382 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:08:21.187395 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:08:21.187411 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:08:21.187424 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:08:21.187437 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:08:21.187449 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:08:21.187463 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:08:21.187475 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:08:21.187488 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:08:21.187501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:08:21.187514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:08:21.187529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:08:21.187542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:08:21.187555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:08:21.187567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:08:21.187580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:08:21.187593 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:08:21.187605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:08:21.187619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:08:21.187634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:08:21.187647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:08:21.187660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:08:21.187672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:08:21.187685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:08:21.187698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:08:21.187711 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:08:21.187723 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:08:21.187736 kernel: Zone ranges:
Jan 14 13:08:21.187751 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:08:21.187764 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:08:21.187777 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:08:21.187790 kernel: Movable zone start for each node
Jan 14 13:08:21.187802 kernel: Early memory node ranges
Jan 14 13:08:21.187815 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:08:21.187828 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:08:21.187840 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:08:21.187853 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:08:21.187869 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:08:21.187881 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:08:21.187894 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:08:21.187907 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:08:21.187919 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:08:21.187930 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:08:21.187942 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:08:21.187954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:08:21.187967 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:08:21.187995 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:08:21.188007 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:08:21.188019 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:08:21.188046 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:08:21.188059 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:08:21.188071 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:08:21.188082 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:08:21.188095 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:08:21.188107 kernel: pcpu-alloc: [0] 0 1 
Jan 14 13:08:21.188123 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:08:21.188140 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:08:21.188154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:08:21.188168 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:08:21.188180 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:08:21.188193 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:08:21.188206 kernel: Fallback order for Node 0: 0 
Jan 14 13:08:21.188220 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062618
Jan 14 13:08:21.188238 kernel: Policy zone: Normal
Jan 14 13:08:21.188262 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:08:21.188277 kernel: software IO TLB: area num 2.
Jan 14 13:08:21.188294 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved)
Jan 14 13:08:21.188308 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:08:21.188323 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 14 13:08:21.188337 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:08:21.188351 kernel: Dynamic Preempt: voluntary
Jan 14 13:08:21.188365 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:08:21.188380 kernel: rcu:         RCU event tracing is enabled.
Jan 14 13:08:21.188394 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:08:21.188412 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 14 13:08:21.188426 kernel:         Rude variant of Tasks RCU enabled.
Jan 14 13:08:21.188440 kernel:         Tracing variant of Tasks RCU enabled.
Jan 14 13:08:21.188454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:08:21.188468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:08:21.188482 kernel: Using NULL legacy PIC
Jan 14 13:08:21.188498 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:08:21.188512 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:08:21.188530 kernel: Console: colour dummy device 80x25
Jan 14 13:08:21.188543 kernel: printk: console [tty1] enabled
Jan 14 13:08:21.188557 kernel: printk: console [ttyS0] enabled
Jan 14 13:08:21.188571 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:08:21.188584 kernel: ACPI: Core revision 20230628
Jan 14 13:08:21.188598 kernel: Failed to register legacy timer interrupt
Jan 14 13:08:21.188611 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:08:21.188628 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:08:21.188641 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:08:21.188655 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:08:21.188668 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:08:21.188682 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:08:21.188696 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:08:21.188709 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:08:21.188723 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:08:21.188736 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jan 14 13:08:21.188753 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:08:21.188767 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:08:21.188780 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:08:21.188794 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:08:21.188807 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:08:21.188820 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:08:21.188834 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:08:21.188847 kernel: RETBleed: Vulnerable
Jan 14 13:08:21.188860 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:08:21.188873 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:08:21.188889 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:08:21.188902 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:08:21.188915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:08:21.188929 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:08:21.188942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:08:21.188955 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:08:21.188969 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:08:21.189079 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:08:21.189093 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 14 13:08:21.189106 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Jan 14 13:08:21.189119 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Jan 14 13:08:21.189136 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:08:21.189149 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:08:21.189163 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:08:21.189175 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:08:21.189188 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:08:21.189209 kernel: landlock: Up and running.
Jan 14 13:08:21.189222 kernel: SELinux:  Initializing.
Jan 14 13:08:21.189235 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.189249 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.189266 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:08:21.189280 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:08:21.189298 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:08:21.189313 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:08:21.189328 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:08:21.189343 kernel: signal: max sigframe size: 3632
Jan 14 13:08:21.189357 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:08:21.189372 kernel: rcu:         Max phase no-delay instances is 400.
Jan 14 13:08:21.189386 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:08:21.189401 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:08:21.189415 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:08:21.189432 kernel: .... node  #0, CPUs:      #1
Jan 14 13:08:21.189447 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:08:21.189463 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:08:21.189477 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:08:21.189491 kernel: smpboot: Max logical packages: 1
Jan 14 13:08:21.189506 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jan 14 13:08:21.189520 kernel: devtmpfs: initialized
Jan 14 13:08:21.189534 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:08:21.189548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:08:21.189566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:08:21.189580 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:08:21.189595 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:08:21.189609 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:08:21.189623 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:08:21.189638 kernel: audit: type=2000 audit(1736860099.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:08:21.189652 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:08:21.189667 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:08:21.189683 kernel: cpuidle: using governor menu
Jan 14 13:08:21.189698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:08:21.189712 kernel: dca service started, version 1.12.1
Jan 14 13:08:21.189726 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:08:21.189741 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:08:21.189755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:08:21.189770 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:08:21.189784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:08:21.189798 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:08:21.189815 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:08:21.189829 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:08:21.189844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:08:21.189858 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:08:21.189872 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:08:21.189886 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:08:21.189900 kernel: ACPI: Interpreter enabled
Jan 14 13:08:21.189915 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:08:21.189929 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:08:21.189946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:08:21.189961 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:08:21.189989 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:08:21.190008 kernel: iommu: Default domain type: Translated
Jan 14 13:08:21.190020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:08:21.190031 kernel: efivars: Registered efivars operations
Jan 14 13:08:21.190043 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:08:21.190055 kernel: PCI: System does not support PCI
Jan 14 13:08:21.190067 kernel: vgaarb: loaded
Jan 14 13:08:21.190079 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:08:21.190094 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:08:21.190103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:08:21.190111 kernel: pnp: PnP ACPI init
Jan 14 13:08:21.190119 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:08:21.190127 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:08:21.190135 kernel: NET: Registered PF_INET protocol family
Jan 14 13:08:21.193093 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:08:21.193109 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:08:21.193127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:08:21.193139 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:08:21.193152 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:08:21.193164 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:08:21.193177 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.193189 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:08:21.193201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:08:21.193214 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:08:21.193228 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:08:21.193246 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:08:21.193261 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jan 14 13:08:21.193275 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:08:21.193290 kernel: Initialise system trusted keyrings
Jan 14 13:08:21.193304 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:08:21.193318 kernel: Key type asymmetric registered
Jan 14 13:08:21.193332 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:08:21.193346 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:08:21.193361 kernel: io scheduler mq-deadline registered
Jan 14 13:08:21.193378 kernel: io scheduler kyber registered
Jan 14 13:08:21.193392 kernel: io scheduler bfq registered
Jan 14 13:08:21.193406 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:08:21.193420 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:08:21.193434 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:08:21.193449 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:08:21.193463 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:08:21.193662 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:08:21.193792 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:08:20 UTC (1736860100)
Jan 14 13:08:21.193909 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:08:21.193928 kernel: intel_pstate: CPU model not supported
Jan 14 13:08:21.193942 kernel: efifb: probing for efifb
Jan 14 13:08:21.193957 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:08:21.193971 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:08:21.194010 kernel: efifb: scrolling: redraw
Jan 14 13:08:21.194025 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:08:21.194040 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:08:21.194059 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:08:21.194072 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:08:21.194087 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:08:21.194101 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:08:21.194115 kernel: Segment Routing with IPv6
Jan 14 13:08:21.194130 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:08:21.194144 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:08:21.194159 kernel: Key type dns_resolver registered
Jan 14 13:08:21.194173 kernel: IPI shorthand broadcast: enabled
Jan 14 13:08:21.194191 kernel: sched_clock: Marking stable (997172000, 56560500)->(1323869200, -270136700)
Jan 14 13:08:21.194206 kernel: registered taskstats version 1
Jan 14 13:08:21.194220 kernel: Loading compiled-in X.509 certificates
Jan 14 13:08:21.194235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 14 13:08:21.194249 kernel: Key type .fscrypt registered
Jan 14 13:08:21.194263 kernel: Key type fscrypt-provisioning registered
Jan 14 13:08:21.194278 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:08:21.194292 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:08:21.194307 kernel: ima: No architecture policies found
Jan 14 13:08:21.194324 kernel: clk: Disabling unused clocks
Jan 14 13:08:21.194338 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 14 13:08:21.194353 kernel: Write protecting the kernel read-only data: 38912k
Jan 14 13:08:21.194367 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 14 13:08:21.194381 kernel: Run /init as init process
Jan 14 13:08:21.194396 kernel:   with arguments:
Jan 14 13:08:21.194410 kernel:     /init
Jan 14 13:08:21.194424 kernel:   with environment:
Jan 14 13:08:21.194437 kernel:     HOME=/
Jan 14 13:08:21.194454 kernel:     TERM=linux
Jan 14 13:08:21.194468 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:08:21.194487 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:08:21.194505 systemd[1]: Detected virtualization microsoft.
Jan 14 13:08:21.194520 systemd[1]: Detected architecture x86-64.
Jan 14 13:08:21.194535 systemd[1]: Running in initrd.
Jan 14 13:08:21.194549 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:08:21.194564 systemd[1]: Hostname set to <localhost>.
Jan 14 13:08:21.194582 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:08:21.194597 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:08:21.194612 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:08:21.194627 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:08:21.194644 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:08:21.194659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:08:21.194674 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:08:21.194693 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:08:21.194711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:08:21.194726 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:08:21.194741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:08:21.194757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:08:21.194772 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:08:21.194787 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:08:21.194806 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:08:21.194821 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:08:21.194836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:08:21.194851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:08:21.194866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:08:21.194881 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:08:21.194896 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:08:21.194911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:08:21.194926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:08:21.194944 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:08:21.194959 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:08:21.194988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:08:21.195005 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:08:21.195020 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:08:21.195035 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:08:21.195050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:08:21.195095 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:08:21.195134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:21.195150 systemd-journald[177]: Journal started
Jan 14 13:08:21.195190 systemd-journald[177]: Runtime Journal (/run/log/journal/43ec4e1ec3594933bd50acbfa026075d) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:08:21.186498 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:08:21.208647 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:08:21.214020 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:08:21.214527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:08:21.215358 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:08:21.232359 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:08:21.244813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:08:21.250018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:08:21.253622 kernel: Bridge firewalling registered
Jan 14 13:08:21.253757 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:08:21.260173 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:08:21.263774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:21.269361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:08:21.283895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:08:21.293322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:08:21.299153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:08:21.315219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:08:21.322669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:21.340197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:08:21.343370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:08:21.349736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:08:21.362396 dracut-cmdline[211]: dracut-dracut-053
Jan 14 13:08:21.362396 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:08:21.381464 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:08:21.422687 systemd-resolved[226]: Positive Trust Anchors:
Jan 14 13:08:21.425530 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:08:21.429699 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:08:21.451585 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 14 13:08:21.456308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:08:21.464731 kernel: SCSI subsystem initialized
Jan 14 13:08:21.464931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:08:21.476998 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:08:21.488005 kernel: iscsi: registered transport (tcp)
Jan 14 13:08:21.509228 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:08:21.509322 kernel: QLogic iSCSI HBA Driver
Jan 14 13:08:21.546270 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:08:21.555183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:08:21.587353 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:08:21.587470 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:08:21.591761 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:08:21.642012 kernel: raid6: avx512x4 gen() 13048 MB/s
Jan 14 13:08:21.660998 kernel: raid6: avx512x2 gen() 17925 MB/s
Jan 14 13:08:21.679990 kernel: raid6: avx512x1 gen() 17907 MB/s
Jan 14 13:08:21.699998 kernel: raid6: avx2x4   gen() 17994 MB/s
Jan 14 13:08:21.718988 kernel: raid6: avx2x2   gen() 17901 MB/s
Jan 14 13:08:21.739232 kernel: raid6: avx2x1   gen() 13816 MB/s
Jan 14 13:08:21.739315 kernel: raid6: using algorithm avx2x4 gen() 17994 MB/s
Jan 14 13:08:21.760951 kernel: raid6: .... xor() 6821 MB/s, rmw enabled
Jan 14 13:08:21.761031 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:08:21.783003 kernel: xor: automatically using best checksumming function   avx       
Jan 14 13:08:21.924001 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:08:21.934035 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:08:21.945193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:08:21.959319 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 14 13:08:21.963753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:08:21.979722 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:08:21.997745 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 14 13:08:22.029078 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:08:22.036277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:08:22.080504 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:08:22.093180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:08:22.113836 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:08:22.123245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:08:22.127438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:08:22.131460 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:08:22.148650 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:08:22.175130 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:08:22.192013 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:08:22.205055 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:08:22.223012 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:08:22.234794 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:08:22.234870 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 14 13:08:22.242002 kernel: PTP clock support registered
Jan 14 13:08:22.251445 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:08:22.251517 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:08:22.253469 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:08:22.258621 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:08:22.258703 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:08:22.256532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:08:22.942806 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 13:08:22.256762 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:22.923019 systemd-resolved[226]: Clock change detected. Flushing caches.
Jan 14 13:08:22.929085 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:08:22.932510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:22.932850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:22.936130 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:22.967312 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:08:22.972370 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:08:22.971811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:22.977853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:22.978038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:22.997945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:23.002971 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:08:23.007388 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:08:23.014599 kernel: scsi host0: storvsc_host_t
Jan 14 13:08:23.014689 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:08:23.017570 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Jan 14 13:08:23.023384 kernel: scsi host1: storvsc_host_t
Jan 14 13:08:23.023459 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Jan 14 13:08:23.048820 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:08:23.055352 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 13:08:23.056722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:23.070458 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Jan 14 13:08:23.076537 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:08:23.094366 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:08:23.097135 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:08:23.097161 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:08:23.112962 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:08:23.133928 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:08:23.134125 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:08:23.134451 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:08:23.134646 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:08:23.134818 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:08:23.134840 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:08:23.114017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:23.188392 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: VF slot 1 added
Jan 14 13:08:23.201489 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:08:23.201869 kernel: hv_pci f63442c2-e231-4b6b-af8f-fb0a7de4d113: PCI VMBus probing: Using version 0x10004
Jan 14 13:08:23.258262 kernel: hv_pci f63442c2-e231-4b6b-af8f-fb0a7de4d113: PCI host bridge to bus e231:00
Jan 14 13:08:23.258537 kernel: pci_bus e231:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:08:23.258779 kernel: pci_bus e231:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:08:23.258988 kernel: pci e231:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:08:23.259240 kernel: pci e231:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:08:23.259505 kernel: pci e231:00:02.0: enabling Extended Tags
Jan 14 13:08:23.259724 kernel: pci e231:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e231:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:08:23.259925 kernel: pci_bus e231:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:08:23.260070 kernel: pci e231:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:08:23.429348 kernel: mlx5_core e231:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:08:23.685475 kernel: mlx5_core e231:00:02.0: firmware version: 14.30.5000
Jan 14 13:08:23.685720 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (471)
Jan 14 13:08:23.685743 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (453)
Jan 14 13:08:23.685762 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: VF registering: eth1
Jan 14 13:08:23.686391 kernel: mlx5_core e231:00:02.0 eth1: joined to eth0
Jan 14 13:08:23.686613 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:08:23.686636 kernel: mlx5_core e231:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:08:23.548782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:08:23.605309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:08:23.632580 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:08:23.636482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:08:23.646398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:08:23.658500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:08:23.721321 kernel: mlx5_core e231:00:02.0 enP57905s1: renamed from eth1
Jan 14 13:08:24.692374 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:08:24.694436 disk-uuid[608]: The operation has completed successfully.
Jan 14 13:08:24.777717 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:08:24.777858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:08:24.798524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:08:24.805347 sh[695]: Success
Jan 14 13:08:24.839376 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:08:25.029575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:08:25.049761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:08:25.057423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:08:25.077310 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 14 13:08:25.077409 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:25.082157 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:08:25.085425 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:08:25.088359 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:08:25.329956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:08:25.336711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:08:25.349497 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:08:25.360509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:08:25.441222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:08:25.460574 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:25.460602 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:25.460614 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:08:25.461577 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:08:25.475309 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:08:25.486087 systemd-networkd[856]: lo: Link UP
Jan 14 13:08:25.486106 systemd-networkd[856]: lo: Gained carrier
Jan 14 13:08:25.488464 systemd-networkd[856]: Enumeration completed
Jan 14 13:08:25.503416 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:25.488826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:08:25.490825 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:25.490830 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:08:25.491954 systemd[1]: Reached target network.target - Network.
Jan 14 13:08:25.493914 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:08:25.525889 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:08:25.546566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:08:25.576328 kernel: mlx5_core e231:00:02.0 enP57905s1: Link up
Jan 14 13:08:25.612410 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: Data path switched to VF: enP57905s1
Jan 14 13:08:25.612671 systemd-networkd[856]: enP57905s1: Link UP
Jan 14 13:08:25.612804 systemd-networkd[856]: eth0: Link UP
Jan 14 13:08:25.613057 systemd-networkd[856]: eth0: Gained carrier
Jan 14 13:08:25.613067 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:25.630689 systemd-networkd[856]: enP57905s1: Gained carrier
Jan 14 13:08:25.673387 systemd-networkd[856]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:08:26.516859 ignition[880]: Ignition 2.20.0
Jan 14 13:08:26.516871 ignition[880]: Stage: fetch-offline
Jan 14 13:08:26.516915 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.516925 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.517098 ignition[880]: parsed url from cmdline: ""
Jan 14 13:08:26.517104 ignition[880]: no config URL provided
Jan 14 13:08:26.517111 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.517124 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.517133 ignition[880]: failed to fetch config: resource requires networking
Jan 14 13:08:26.526541 ignition[880]: Ignition finished successfully
Jan 14 13:08:26.546262 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:08:26.557518 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:08:26.573014 ignition[889]: Ignition 2.20.0
Jan 14 13:08:26.573028 ignition[889]: Stage: fetch
Jan 14 13:08:26.573260 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.573274 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.573433 ignition[889]: parsed url from cmdline: ""
Jan 14 13:08:26.573437 ignition[889]: no config URL provided
Jan 14 13:08:26.573442 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.573450 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:08:26.575041 ignition[889]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:08:26.656076 ignition[889]: GET result: OK
Jan 14 13:08:26.656149 ignition[889]: config has been read from IMDS userdata
Jan 14 13:08:26.656168 ignition[889]: parsing config with SHA512: 1c20d66efd75d25becb83fae6c95083142edcee757f6c0661090c2c64790d1628e61110a6aececf15c947e6cf946d85022216dc77bb0d9499e21f5ffb9706fce
Jan 14 13:08:26.661975 unknown[889]: fetched base config from "system"
Jan 14 13:08:26.661988 unknown[889]: fetched base config from "system"
Jan 14 13:08:26.662352 ignition[889]: fetch: fetch complete
Jan 14 13:08:26.661995 unknown[889]: fetched user config from "azure"
Jan 14 13:08:26.662357 ignition[889]: fetch: fetch passed
Jan 14 13:08:26.669730 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:08:26.662399 ignition[889]: Ignition finished successfully
Jan 14 13:08:26.686525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:08:26.701244 ignition[895]: Ignition 2.20.0
Jan 14 13:08:26.701257 ignition[895]: Stage: kargs
Jan 14 13:08:26.701488 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.701503 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.712070 ignition[895]: kargs: kargs passed
Jan 14 13:08:26.712153 ignition[895]: Ignition finished successfully
Jan 14 13:08:26.717551 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:08:26.728480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:08:26.741445 ignition[901]: Ignition 2.20.0
Jan 14 13:08:26.741457 ignition[901]: Stage: disks
Jan 14 13:08:26.741672 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:26.741685 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:26.742365 ignition[901]: disks: disks passed
Jan 14 13:08:26.742409 ignition[901]: Ignition finished successfully
Jan 14 13:08:26.745571 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:08:26.749551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:08:26.755425 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:08:26.759033 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:08:26.779330 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:08:26.779363 systemd-networkd[856]: enP57905s1: Gained IPv6LL
Jan 14 13:08:26.782319 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:08:26.797460 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:08:26.842020 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:08:26.848111 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:08:26.859550 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:08:26.951310 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 14 13:08:26.951882 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:08:26.954804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:08:26.969397 systemd-networkd[856]: eth0: Gained IPv6LL
Jan 14 13:08:26.991396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:08:26.998484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:08:27.008129 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (920)
Jan 14 13:08:27.009000 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:08:27.015705 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:27.024241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:08:27.024313 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:08:27.024475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:27.024557 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:08:27.033574 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:08:27.042060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:08:27.044945 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:08:27.059485 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:08:27.552083 coreos-metadata[922]: Jan 14 13:08:27.552 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:08:27.557041 coreos-metadata[922]: Jan 14 13:08:27.555 INFO Fetch successful
Jan 14 13:08:27.557041 coreos-metadata[922]: Jan 14 13:08:27.555 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:08:27.565744 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:08:27.571332 coreos-metadata[922]: Jan 14 13:08:27.570 INFO Fetch successful
Jan 14 13:08:27.575397 coreos-metadata[922]: Jan 14 13:08:27.571 INFO wrote hostname ci-4186.1.0-a-6f4e4149be to /sysroot/etc/hostname
Jan 14 13:08:27.580945 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:08:27.597164 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:08:27.604272 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:08:27.621919 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:08:28.256039 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:08:28.267425 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:08:28.276458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:08:28.282784 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:28.287597 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:08:28.315111 ignition[1038]: INFO     : Ignition 2.20.0
Jan 14 13:08:28.315111 ignition[1038]: INFO     : Stage: mount
Jan 14 13:08:28.320336 ignition[1038]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:28.320336 ignition[1038]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:28.320336 ignition[1038]: INFO     : mount: mount passed
Jan 14 13:08:28.320336 ignition[1038]: INFO     : Ignition finished successfully
Jan 14 13:08:28.324456 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:08:28.335040 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:08:28.352503 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:08:28.361567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:08:28.387318 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050)
Jan 14 13:08:28.387378 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:08:28.391308 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:08:28.397178 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:08:28.404316 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:08:28.404486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:08:28.427550 ignition[1067]: INFO     : Ignition 2.20.0
Jan 14 13:08:28.427550 ignition[1067]: INFO     : Stage: files
Jan 14 13:08:28.432805 ignition[1067]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:28.432805 ignition[1067]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:28.432805 ignition[1067]: DEBUG    : files: compiled without relabeling support, skipping
Jan 14 13:08:28.442281 ignition[1067]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 14 13:08:28.442281 ignition[1067]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:08:28.494899 ignition[1067]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:08:28.495561 unknown[1067]: wrote ssh authorized keys file for user: core
Jan 14 13:08:28.499269 ignition[1067]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 14 13:08:28.499269 ignition[1067]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:08:28.522838 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:28.528942 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 14 13:08:28.963050 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 14 13:08:29.439566 ignition[1067]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:08:29.446048 ignition[1067]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:08:29.451051 ignition[1067]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:08:29.451051 ignition[1067]: INFO     : files: files passed
Jan 14 13:08:29.451051 ignition[1067]: INFO     : Ignition finished successfully
Jan 14 13:08:29.457187 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:08:29.469635 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:08:29.476723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:08:29.480278 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:08:29.482431 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:08:29.498139 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:08:29.498139 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:08:29.507573 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:08:29.503254 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:08:29.511493 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:08:29.529551 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:08:29.562637 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:08:29.562753 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:08:29.566085 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:08:29.566176 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:08:29.570563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:08:29.573843 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:08:29.599417 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:08:29.611467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:08:29.624261 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:08:29.632089 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:08:29.638847 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:08:29.641463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:08:29.641605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:08:29.653410 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:08:29.656689 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:08:29.664410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:08:29.670594 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:08:29.677138 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:08:29.683904 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:08:29.689813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:08:29.693192 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:08:29.702035 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:08:29.707676 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:08:29.712669 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:08:29.712836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:08:29.719264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:08:29.724550 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:08:29.734066 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:08:29.736604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:08:29.740522 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:08:29.740667 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:08:29.752896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:08:29.753098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:08:29.763815 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:08:29.764011 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:08:29.769693 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:08:29.769851 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:08:29.791567 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:08:29.796959 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:08:29.799734 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:08:29.808371 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:08:29.811219 ignition[1119]: INFO     : Ignition 2.20.0
Jan 14 13:08:29.811219 ignition[1119]: INFO     : Stage: umount
Jan 14 13:08:29.811219 ignition[1119]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:08:29.811219 ignition[1119]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:08:29.811219 ignition[1119]: INFO     : umount: umount passed
Jan 14 13:08:29.811219 ignition[1119]: INFO     : Ignition finished successfully
Jan 14 13:08:29.818061 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:08:29.818401 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:08:29.821936 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:08:29.822052 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:08:29.851676 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:08:29.851796 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:08:29.861250 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:08:29.861650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:08:29.867425 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:08:29.870124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:08:29.878117 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:08:29.878194 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:08:29.881264 systemd[1]: Stopped target network.target - Network.
Jan 14 13:08:29.883711 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:08:29.883783 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:08:29.889968 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:08:29.890520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:08:29.897702 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:08:29.904115 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:08:29.909370 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:08:29.911900 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:08:29.911963 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:08:29.917064 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:08:29.917124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:08:29.919958 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:08:29.920016 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:08:29.930007 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:08:29.930096 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:08:29.941189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:08:29.941995 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:08:29.950570 systemd-networkd[856]: eth0: DHCPv6 lease lost
Jan 14 13:08:29.952049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:08:29.953060 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:08:29.953159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:08:29.957684 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:08:29.957772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:08:29.966918 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:08:29.966961 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:08:29.985517 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:08:29.993803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:08:29.993904 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:08:30.002632 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:08:30.006667 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:08:30.006783 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:08:30.031431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:08:30.031607 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:08:30.062788 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:08:30.062880 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:08:30.069118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:08:30.069172 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:08:30.080516 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:08:30.080607 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:08:30.099541 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:08:30.099632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:08:30.113101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:08:30.113199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:08:30.120004 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: Data path switched from VF: enP57905s1
Jan 14 13:08:30.127512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:08:30.130601 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:08:30.130686 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:08:30.134498 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:08:30.134577 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:08:30.139900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:08:30.139975 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:08:30.152524 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:08:30.152587 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:08:30.152905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:30.152940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:30.153790 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:08:30.153894 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:08:30.155021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:08:30.155096 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:08:31.293027 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:08:31.293161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:08:31.297295 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:08:31.300778 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:08:31.301025 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:08:31.317604 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:08:31.329677 systemd[1]: Switching root.
Jan 14 13:08:31.545666 systemd-journald[177]: Journal stopped
Jan 14 13:08:37.300927 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:08:37.300970 kernel: SELinux:  policy capability network_peer_controls=1
Jan 14 13:08:37.300988 kernel: SELinux:  policy capability open_perms=1
Jan 14 13:08:37.301002 kernel: SELinux:  policy capability extended_socket_class=1
Jan 14 13:08:37.301016 kernel: SELinux:  policy capability always_check_network=0
Jan 14 13:08:37.301029 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 14 13:08:37.301045 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 14 13:08:37.301062 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 14 13:08:37.301077 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 14 13:08:37.301091 kernel: audit: type=1403 audit(1736860114.880:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 13:08:37.301107 systemd[1]: Successfully loaded SELinux policy in 106.974ms.
Jan 14 13:08:37.301124 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.390ms.
Jan 14 13:08:37.301141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:08:37.301157 systemd[1]: Detected virtualization microsoft.
Jan 14 13:08:37.301177 systemd[1]: Detected architecture x86-64.
Jan 14 13:08:37.301193 systemd[1]: Detected first boot.
Jan 14 13:08:37.301210 systemd[1]: Hostname set to <ci-4186.1.0-a-6f4e4149be>.
Jan 14 13:08:37.301226 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:08:37.301242 zram_generator::config[1162]: No configuration found.
Jan 14 13:08:37.301262 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:08:37.301278 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 13:08:37.312015 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 13:08:37.312047 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:08:37.312066 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:08:37.312083 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:08:37.312102 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:08:37.312125 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:08:37.312143 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:08:37.312161 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:08:37.312178 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:08:37.312194 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:08:37.312212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:08:37.312229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:08:37.312246 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:08:37.312265 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:08:37.312282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:08:37.312311 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:08:37.312328 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:08:37.312345 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:08:37.312364 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 13:08:37.312387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 13:08:37.312404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:08:37.312425 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:08:37.312442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:08:37.312459 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:08:37.312477 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:08:37.312494 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:08:37.312511 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:08:37.312528 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:08:37.312545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:08:37.312566 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:08:37.312584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:08:37.312601 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:08:37.312619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:08:37.312640 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:08:37.312657 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:08:37.312675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:37.312692 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:08:37.312710 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:08:37.312728 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:08:37.312747 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:08:37.312764 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:08:37.312785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:08:37.312803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:08:37.312820 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:08:37.312838 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:08:37.312855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:08:37.312872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:08:37.312890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:08:37.312907 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:08:37.312925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:08:37.312946 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:08:37.312963 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 13:08:37.312981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 13:08:37.312998 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 13:08:37.313015 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 13:08:37.313033 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:08:37.313050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:08:37.313067 kernel: loop: module loaded
Jan 14 13:08:37.313087 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:08:37.313104 kernel: fuse: init (API version 7.39)
Jan 14 13:08:37.313121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:08:37.313173 systemd-journald[1268]: Collecting audit messages is disabled.
Jan 14 13:08:37.313215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:08:37.313233 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 14 13:08:37.313251 systemd[1]: Stopped verity-setup.service.
Jan 14 13:08:37.313269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:37.313296 systemd-journald[1268]: Journal started
Jan 14 13:08:37.313331 systemd-journald[1268]: Runtime Journal (/run/log/journal/c258052c851745b8af22094b47727c7e) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:08:37.320631 kernel: ACPI: bus type drm_connector registered
Jan 14 13:08:36.589344 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:08:36.699183 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 13:08:36.699625 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 13:08:37.324353 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:08:37.331454 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:08:37.334037 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:08:37.337192 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:08:37.339925 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:08:37.342967 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:08:37.346223 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:08:37.349508 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:08:37.353455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:08:37.357605 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:08:37.358147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:08:37.362169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:08:37.362830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:08:37.367149 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:08:37.367568 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:08:37.371430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:08:37.371683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:08:37.375971 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:08:37.376277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:08:37.380262 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:08:37.380696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:08:37.384358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:08:37.388165 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:08:37.393338 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:08:37.412092 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:08:37.422399 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:08:37.428442 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:08:37.431966 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:08:37.432021 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:08:37.436429 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 13:08:37.444845 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 13:08:37.451537 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:08:37.454798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:08:37.484494 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:08:37.489144 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:08:37.492667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:08:37.496163 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:08:37.500454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:08:37.502499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:08:37.516467 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:08:37.522588 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:08:37.530316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:08:37.534449 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:08:37.538510 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:08:37.544541 systemd-journald[1268]: Time spent on flushing to /var/log/journal/c258052c851745b8af22094b47727c7e is 40.530ms for 941 entries.
Jan 14 13:08:37.544541 systemd-journald[1268]: System Journal (/var/log/journal/c258052c851745b8af22094b47727c7e) is 8.0M, max 2.6G, 2.6G free.
Jan 14 13:08:37.622555 systemd-journald[1268]: Received client request to flush runtime journal.
Jan 14 13:08:37.622616 kernel: loop0: detected capacity change from 0 to 141000
Jan 14 13:08:37.550611 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 13:08:37.554821 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:08:37.567196 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:08:37.581465 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 13:08:37.593493 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 13:08:37.606824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:08:37.625655 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:08:37.637650 udevadm[1309]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 14 13:08:37.669615 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:08:37.670317 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 13:08:37.770476 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:08:37.783212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:08:37.850796 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jan 14 13:08:37.850825 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jan 14 13:08:37.858318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:08:37.942319 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:08:37.968316 kernel: loop1: detected capacity change from 0 to 210664
Jan 14 13:08:38.013329 kernel: loop2: detected capacity change from 0 to 138184
Jan 14 13:08:38.549618 kernel: loop3: detected capacity change from 0 to 28304
Jan 14 13:08:38.881862 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:08:38.891877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:08:38.916095 systemd-udevd[1323]: Using default interface naming scheme 'v255'.
Jan 14 13:08:39.154354 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:08:39.170314 kernel: loop4: detected capacity change from 0 to 141000
Jan 14 13:08:39.174627 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:08:39.207311 kernel: loop5: detected capacity change from 0 to 210664
Jan 14 13:08:39.234312 kernel: loop6: detected capacity change from 0 to 138184
Jan 14 13:08:39.246515 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:08:39.266625 kernel: loop7: detected capacity change from 0 to 28304
Jan 14 13:08:39.270547 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 13:08:39.277933 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 13:08:39.280397 (sd-merge)[1333]: Merged extensions into '/usr'.
Jan 14 13:08:39.288992 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:08:39.289018 systemd[1]: Reloading...
Jan 14 13:08:39.413550 zram_generator::config[1380]: No configuration found.
Jan 14 13:08:39.608999 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 13:08:39.609128 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 13:08:39.614316 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 13:08:39.620721 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 13:08:39.620805 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:08:39.620823 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 13:08:39.629646 kernel: Console: switching to colour dummy device 80x25
Jan 14 13:08:39.632310 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1337)
Jan 14 13:08:39.649848 systemd-networkd[1336]: lo: Link UP
Jan 14 13:08:39.650147 systemd-networkd[1336]: lo: Gained carrier
Jan 14 13:08:39.675687 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:08:39.692465 systemd-networkd[1336]: Enumeration completed
Jan 14 13:08:39.692981 systemd-networkd[1336]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:39.692992 systemd-networkd[1336]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:08:39.815319 kernel: mlx5_core e231:00:02.0 enP57905s1: Link up
Jan 14 13:08:39.870310 kernel: hv_netvsc 000d3ab1-8cea-000d-3ab1-8cea000d3ab1 eth0: Data path switched to VF: enP57905s1
Jan 14 13:08:39.879232 systemd-networkd[1336]: enP57905s1: Link UP
Jan 14 13:08:39.879415 systemd-networkd[1336]: eth0: Link UP
Jan 14 13:08:39.879420 systemd-networkd[1336]: eth0: Gained carrier
Jan 14 13:08:39.879443 systemd-networkd[1336]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:39.885673 systemd-networkd[1336]: enP57905s1: Gained carrier
Jan 14 13:08:39.909862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:08:39.961563 systemd-networkd[1336]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:08:40.038331 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 13:08:40.106437 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:08:40.111161 systemd[1]: Reloading finished in 821 ms.
Jan 14 13:08:40.137543 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:08:40.141198 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:08:40.144958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:08:40.195901 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:08:40.206521 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:08:40.213504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:08:40.228489 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:08:40.235514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:40.246654 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:08:40.247474 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 13:08:40.248871 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 13:08:40.249500 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Jan 14 13:08:40.249678 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Jan 14 13:08:40.255511 systemd[1]: Reloading requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:08:40.255525 systemd[1]: Reloading...
Jan 14 13:08:40.276019 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:08:40.276038 systemd-tmpfiles[1518]: Skipping /boot
Jan 14 13:08:40.299932 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:08:40.300477 systemd-tmpfiles[1518]: Skipping /boot
Jan 14 13:08:40.370323 zram_generator::config[1555]: No configuration found.
Jan 14 13:08:40.512712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:08:40.597592 systemd[1]: Reloading finished in 341 ms.
Jan 14 13:08:40.623973 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:08:40.628896 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:08:40.646657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:40.651957 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:08:40.657529 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:08:40.662043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:08:40.669402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:08:40.677937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:08:40.686943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:08:40.690929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:08:40.695042 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:08:40.703467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:08:40.714725 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:08:40.718250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:08:40.718490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:40.721778 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:40.729674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:08:40.733261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:40.735346 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 13:08:40.741588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:08:40.741788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:08:40.749160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:08:40.754143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:08:40.763096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:08:40.763627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:08:40.780552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:40.780906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:08:40.785804 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 13:08:40.800432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:08:40.808620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:08:40.815659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:08:40.818845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:08:40.819025 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:40.822052 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:08:40.827477 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:08:40.833707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:08:40.842807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:08:40.842987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:08:40.852326 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:08:40.852515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:08:40.858935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:08:40.859116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:08:40.876351 augenrules[1662]: No rules
Jan 14 13:08:40.877987 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:08:40.878690 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:08:40.886898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:40.887467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:08:40.894703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:08:40.903190 lvm[1649]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:08:40.902530 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:08:40.911781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:08:40.920450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:08:40.926247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:08:40.926556 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:08:40.930687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:08:40.934180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:08:40.934409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:08:40.938576 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:08:40.938764 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:08:40.943084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:08:40.943265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:08:40.947445 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:08:40.947615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:08:40.949103 systemd-resolved[1630]: Positive Trust Anchors:
Jan 14 13:08:40.949413 systemd-resolved[1630]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:08:40.949497 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:08:40.954663 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:08:40.955145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:08:40.960001 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:08:40.967438 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 13:08:40.971445 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:08:40.982525 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 13:08:40.991241 lvm[1681]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:08:40.998664 systemd-resolved[1630]: Using system hostname 'ci-4186.1.0-a-6f4e4149be'.
Jan 14 13:08:41.001640 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:08:41.005224 systemd[1]: Reached target network.target - Network.
Jan 14 13:08:41.007744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:08:41.038375 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 13:08:41.303427 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:08:41.307304 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:08:41.689764 systemd-networkd[1336]: eth0: Gained IPv6LL
Jan 14 13:08:41.693189 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 13:08:41.697449 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 13:08:41.753575 systemd-networkd[1336]: enP57905s1: Gained IPv6LL
Jan 14 13:08:43.103844 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:08:43.117965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:08:43.124542 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:08:43.137149 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:08:43.140829 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:08:43.144379 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:08:43.147854 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 13:08:43.151392 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 13:08:43.154477 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 13:08:43.159229 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 13:08:43.162716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 13:08:43.162756 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:08:43.165317 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:08:43.168958 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 13:08:43.173563 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 13:08:43.180214 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 13:08:43.184056 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 13:08:43.187125 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:08:43.189906 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:08:43.192702 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:08:43.192735 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:08:43.198395 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 13:08:43.203434 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 13:08:43.212486 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 13:08:43.223497 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 13:08:43.232422 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 13:08:43.245585 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 13:08:43.250187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 13:08:43.250248 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 13:08:43.252349 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 13:08:43.254584 (chronyd)[1690]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 13:08:43.260009 jq[1697]: false
Jan 14 13:08:43.260394 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 13:08:43.266996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:08:43.274274 KVP[1699]: KVP starting; pid is:1699
Jan 14 13:08:43.274494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 13:08:43.284334 chronyd[1705]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 13:08:43.285549 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 13:08:43.290477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 13:08:43.296565 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 13:08:43.313655 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 13:08:43.320168 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:08:43.320947 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:08:43.321824 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:08:43.329400 chronyd[1705]: Timezone right/UTC failed leap second check, ignoring
Jan 14 13:08:43.329682 chronyd[1705]: Loaded seccomp filter (level 2)
Jan 14 13:08:43.333423 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:08:43.342307 kernel: hv_utils: KVP IC version 4.0
Jan 14 13:08:43.342396 KVP[1699]: KVP LIC Version: 3.1
Jan 14 13:08:43.346705 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:08:43.356737 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:08:43.356972 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:08:43.362347 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:08:43.362678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:08:43.366397 dbus-daemon[1693]: [system] SELinux support is enabled
Jan 14 13:08:43.368766 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:08:43.394361 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:08:43.402889 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:08:43.402951 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:08:43.407262 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:08:43.407331 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 13:08:43.409501 update_engine[1712]: I20250114 13:08:43.409426  1712 main.cc:92] Flatcar Update Engine starting
Jan 14 13:08:43.411580 jq[1714]: true
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found loop4
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found loop5
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found loop6
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found loop7
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found sda
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found sda1
Jan 14 13:08:43.414030 extend-filesystems[1698]: Found sda2
Jan 14 13:08:43.443364 extend-filesystems[1698]: Found sda3
Jan 14 13:08:43.443364 extend-filesystems[1698]: Found usr
Jan 14 13:08:43.443364 extend-filesystems[1698]: Found sda4
Jan 14 13:08:43.443364 extend-filesystems[1698]: Found sda6
Jan 14 13:08:43.443364 extend-filesystems[1698]: Found sda7
Jan 14 13:08:43.443364 extend-filesystems[1698]: Found sda9
Jan 14 13:08:43.443364 extend-filesystems[1698]: Checking size of /dev/sda9
Jan 14 13:08:43.456813 update_engine[1712]: I20250114 13:08:43.425123  1712 update_check_scheduler.cc:74] Next update check in 3m4s
Jan 14 13:08:43.421861 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:08:43.422160 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:08:43.463800 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 13:08:43.471153 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:08:43.491510 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 13:08:43.504346 extend-filesystems[1698]: Old size kept for /dev/sda9
Jan 14 13:08:43.507081 extend-filesystems[1698]: Found sr0
Jan 14 13:08:43.517976 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:08:43.518609 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:08:43.527397 jq[1736]: true
Jan 14 13:08:43.572005 systemd-logind[1710]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:08:43.577511 systemd-logind[1710]: New seat seat0.
Jan 14 13:08:43.589680 coreos-metadata[1692]: Jan 14 13:08:43.589 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:08:43.605026 coreos-metadata[1692]: Jan 14 13:08:43.595 INFO Fetch successful
Jan 14 13:08:43.605026 coreos-metadata[1692]: Jan 14 13:08:43.595 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 13:08:43.605026 coreos-metadata[1692]: Jan 14 13:08:43.601 INFO Fetch successful
Jan 14 13:08:43.605026 coreos-metadata[1692]: Jan 14 13:08:43.601 INFO Fetching http://168.63.129.16/machine/0a2b7f56-b601-4c2f-b40b-96059ddc16d7/dee8dacc%2Dd1aa%2D452f%2D8cf7%2D06b738855b3d.%5Fci%2D4186.1.0%2Da%2D6f4e4149be?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 13:08:43.596322 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 13:08:43.607412 coreos-metadata[1692]: Jan 14 13:08:43.606 INFO Fetch successful
Jan 14 13:08:43.607412 coreos-metadata[1692]: Jan 14 13:08:43.606 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:08:43.618994 coreos-metadata[1692]: Jan 14 13:08:43.618 INFO Fetch successful
Jan 14 13:08:43.685024 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 13:08:43.695185 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 13:08:43.718347 bash[1768]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:08:43.722578 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:08:43.728682 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:08:43.850449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1764)
Jan 14 13:08:44.008622 sshd_keygen[1724]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:08:44.037540 locksmithd[1744]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:08:44.072477 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:08:44.102914 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:08:44.109421 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:08:44.119879 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:08:44.120683 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:08:44.135557 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:08:44.146553 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 13:08:44.191111 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 13:08:44.202669 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 13:08:44.207769 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 13:08:44.211808 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 13:08:44.549770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:08:44.565698 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:08:44.734626 containerd[1726]: time="2025-01-14T13:08:44.734546100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 14 13:08:44.775079 containerd[1726]: time="2025-01-14T13:08:44.774245200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776180400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776216900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776239500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776423400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776445700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776513900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776529800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776728600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776750700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776769300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777172 containerd[1726]: time="2025-01-14T13:08:44.776781700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777609 containerd[1726]: time="2025-01-14T13:08:44.776883000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777609 containerd[1726]: time="2025-01-14T13:08:44.777131800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777609 containerd[1726]: time="2025-01-14T13:08:44.777279500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:08:44.777609 containerd[1726]: time="2025-01-14T13:08:44.777315200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 13:08:44.777609 containerd[1726]: time="2025-01-14T13:08:44.777422200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 13:08:44.777609 containerd[1726]: time="2025-01-14T13:08:44.777477900Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 13:08:44.795226 containerd[1726]: time="2025-01-14T13:08:44.795163400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 13:08:44.795369 containerd[1726]: time="2025-01-14T13:08:44.795251400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 13:08:44.795369 containerd[1726]: time="2025-01-14T13:08:44.795273300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 13:08:44.795369 containerd[1726]: time="2025-01-14T13:08:44.795305900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 13:08:44.795369 containerd[1726]: time="2025-01-14T13:08:44.795328000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 13:08:44.795517 containerd[1726]: time="2025-01-14T13:08:44.795498000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 13:08:44.795741 containerd[1726]: time="2025-01-14T13:08:44.795715900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 13:08:44.795865 containerd[1726]: time="2025-01-14T13:08:44.795840900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 13:08:44.795923 containerd[1726]: time="2025-01-14T13:08:44.795863200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 13:08:44.795923 containerd[1726]: time="2025-01-14T13:08:44.795883100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 13:08:44.795923 containerd[1726]: time="2025-01-14T13:08:44.795903000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796020 containerd[1726]: time="2025-01-14T13:08:44.795926700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796020 containerd[1726]: time="2025-01-14T13:08:44.795944500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796020 containerd[1726]: time="2025-01-14T13:08:44.795965100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796020 containerd[1726]: time="2025-01-14T13:08:44.795985200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796020 containerd[1726]: time="2025-01-14T13:08:44.796007200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796025100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796043500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796071100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796090200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796107900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796125700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796143100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796173 containerd[1726]: time="2025-01-14T13:08:44.796162700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796180000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796209900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796231700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796255000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796271100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796306800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796325900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796344700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796373700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796392100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796407900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 14 13:08:44.796469 containerd[1726]: time="2025-01-14T13:08:44.796457200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796481300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796497100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796514600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796527900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796545300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796558900Z" level=info msg="NRI interface is disabled by configuration."
Jan 14 13:08:44.796843 containerd[1726]: time="2025-01-14T13:08:44.796572900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 14 13:08:44.797068 containerd[1726]: time="2025-01-14T13:08:44.796975400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 14 13:08:44.797068 containerd[1726]: time="2025-01-14T13:08:44.797038900Z" level=info msg="Connect containerd service"
Jan 14 13:08:44.797282 containerd[1726]: time="2025-01-14T13:08:44.797098200Z" level=info msg="using legacy CRI server"
Jan 14 13:08:44.797282 containerd[1726]: time="2025-01-14T13:08:44.797109700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 13:08:44.797435 containerd[1726]: time="2025-01-14T13:08:44.797281900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 14 13:08:44.798229 containerd[1726]: time="2025-01-14T13:08:44.798161500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:08:44.798369 containerd[1726]: time="2025-01-14T13:08:44.798318200Z" level=info msg="Start subscribing containerd event"
Jan 14 13:08:44.798427 containerd[1726]: time="2025-01-14T13:08:44.798371700Z" level=info msg="Start recovering state"
Jan 14 13:08:44.798470 containerd[1726]: time="2025-01-14T13:08:44.798446700Z" level=info msg="Start event monitor"
Jan 14 13:08:44.798470 containerd[1726]: time="2025-01-14T13:08:44.798461000Z" level=info msg="Start snapshots syncer"
Jan 14 13:08:44.798534 containerd[1726]: time="2025-01-14T13:08:44.798472800Z" level=info msg="Start cni network conf syncer for default"
Jan 14 13:08:44.798534 containerd[1726]: time="2025-01-14T13:08:44.798484000Z" level=info msg="Start streaming server"
Jan 14 13:08:44.799218 containerd[1726]: time="2025-01-14T13:08:44.798968300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 13:08:44.799218 containerd[1726]: time="2025-01-14T13:08:44.799032700Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 13:08:44.800586 containerd[1726]: time="2025-01-14T13:08:44.800057100Z" level=info msg="containerd successfully booted in 0.066373s"
Jan 14 13:08:44.800235 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 13:08:44.804054 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 13:08:44.807236 systemd[1]: Startup finished in 774ms (firmware) + 25.100s (loader) + 1.160s (kernel) + 13.384s (initrd) + 10.031s (userspace) = 50.451s.
Jan 14 13:08:44.840236 agetty[1861]: failed to open credentials directory
Jan 14 13:08:44.841049 agetty[1862]: failed to open credentials directory
Jan 14 13:08:45.063511 login[1861]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:08:45.065236 login[1862]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:08:45.078605 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 13:08:45.085568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 13:08:45.096364 systemd-logind[1710]: New session 2 of user core.
Jan 14 13:08:45.102865 systemd-logind[1710]: New session 1 of user core.
Jan 14 13:08:45.109831 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 13:08:45.122706 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 13:08:45.134509 (systemd)[1885]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 13:08:45.194109 kubelet[1867]: E0114 13:08:45.193824    1867 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:08:45.200164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:08:45.200556 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:08:45.292981 systemd[1885]: Queued start job for default target default.target.
Jan 14 13:08:45.303133 systemd[1885]: Created slice app.slice - User Application Slice.
Jan 14 13:08:45.303327 systemd[1885]: Reached target paths.target - Paths.
Jan 14 13:08:45.303356 systemd[1885]: Reached target timers.target - Timers.
Jan 14 13:08:45.306432 systemd[1885]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 13:08:45.319616 systemd[1885]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 13:08:45.319748 systemd[1885]: Reached target sockets.target - Sockets.
Jan 14 13:08:45.319767 systemd[1885]: Reached target basic.target - Basic System.
Jan 14 13:08:45.319813 systemd[1885]: Reached target default.target - Main User Target.
Jan 14 13:08:45.319849 systemd[1885]: Startup finished in 176ms.
Jan 14 13:08:45.320113 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 13:08:45.324439 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 13:08:45.325412 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 13:08:45.709861 waagent[1854]: 2025-01-14T13:08:45.709687Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 13:08:45.713503 waagent[1854]: 2025-01-14T13:08:45.713427Z INFO Daemon Daemon OS: flatcar 4186.1.0
Jan 14 13:08:45.716758 waagent[1854]: 2025-01-14T13:08:45.716695Z INFO Daemon Daemon Python: 3.11.10
Jan 14 13:08:45.719362 waagent[1854]: 2025-01-14T13:08:45.719278Z INFO Daemon Daemon Run daemon
Jan 14 13:08:45.721639 waagent[1854]: 2025-01-14T13:08:45.721589Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0'
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.726154Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.726706Z INFO Daemon Daemon Activate resource disk
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.727160Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.733111Z INFO Daemon Daemon Found device: None
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.734121Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.735120Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.736450Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:08:45.744872 waagent[1854]: 2025-01-14T13:08:45.737112Z INFO Daemon Daemon Running default provisioning handler
Jan 14 13:08:45.747467 waagent[1854]: 2025-01-14T13:08:45.746640Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 13:08:45.748594 waagent[1854]: 2025-01-14T13:08:45.748541Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 13:08:45.749367 waagent[1854]: 2025-01-14T13:08:45.749328Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 13:08:45.750245 waagent[1854]: 2025-01-14T13:08:45.750211Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 13:08:45.831527 waagent[1854]: 2025-01-14T13:08:45.831412Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 13:08:45.845319 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 13:08:45.847227 waagent[1854]: 2025-01-14T13:08:45.847139Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 13:08:45.852482 waagent[1854]: 2025-01-14T13:08:45.850457Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:08:45.852482 waagent[1854]: 2025-01-14T13:08:45.850728Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 13:08:45.852482 waagent[1854]: 2025-01-14T13:08:45.851830Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 13:08:45.853158 waagent[1854]: 2025-01-14T13:08:45.853112Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 13:08:45.853931 waagent[1854]: 2025-01-14T13:08:45.853892Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 13:08:45.875341 waagent[1854]: 2025-01-14T13:08:45.875264Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 13:08:45.883969 waagent[1854]: 2025-01-14T13:08:45.875883Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 13:08:45.883969 waagent[1854]: 2025-01-14T13:08:45.876741Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 13:08:45.964967 waagent[1854]: 2025-01-14T13:08:45.964808Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 13:08:45.966953 waagent[1854]: 2025-01-14T13:08:45.966878Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 13:08:45.975073 waagent[1854]: 2025-01-14T13:08:45.975010Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:08:45.991923 waagent[1854]: 2025-01-14T13:08:45.991857Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 14 13:08:46.011344 waagent[1854]: 2025-01-14T13:08:45.992634Z INFO Daemon
Jan 14 13:08:46.011344 waagent[1854]: 2025-01-14T13:08:45.994097Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 40bf5dc3-e0ba-4224-a7eb-8bb5347ee80a eTag: 1456752408848499324 source: Fabric]
Jan 14 13:08:46.011344 waagent[1854]: 2025-01-14T13:08:45.995299Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 14 13:08:46.011344 waagent[1854]: 2025-01-14T13:08:45.995960Z INFO Daemon
Jan 14 13:08:46.011344 waagent[1854]: 2025-01-14T13:08:45.996896Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:08:46.011344 waagent[1854]: 2025-01-14T13:08:46.002141Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 13:08:46.075904 waagent[1854]: 2025-01-14T13:08:46.075809Z INFO Daemon Downloaded certificate {'thumbprint': '4EC057D59B0DBFFFFEE657E64F64B56E513BF096', 'hasPrivateKey': True}
Jan 14 13:08:46.081896 waagent[1854]: 2025-01-14T13:08:46.081820Z INFO Daemon Fetch goal state completed
Jan 14 13:08:46.091550 waagent[1854]: 2025-01-14T13:08:46.091504Z INFO Daemon Daemon Starting provisioning
Jan 14 13:08:46.099047 waagent[1854]: 2025-01-14T13:08:46.091760Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 13:08:46.099047 waagent[1854]: 2025-01-14T13:08:46.092953Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-6f4e4149be]
Jan 14 13:08:46.130214 waagent[1854]: 2025-01-14T13:08:46.130121Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-6f4e4149be]
Jan 14 13:08:46.139350 waagent[1854]: 2025-01-14T13:08:46.130698Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 13:08:46.139350 waagent[1854]: 2025-01-14T13:08:46.131256Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 13:08:46.155047 systemd-networkd[1336]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:08:46.155399 systemd-networkd[1336]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:08:46.155452 systemd-networkd[1336]: eth0: DHCP lease lost
Jan 14 13:08:46.156474 waagent[1854]: 2025-01-14T13:08:46.156388Z INFO Daemon Daemon Create user account if not exists
Jan 14 13:08:46.174453 waagent[1854]: 2025-01-14T13:08:46.156785Z INFO Daemon Daemon User core already exists, skip useradd
Jan 14 13:08:46.174453 waagent[1854]: 2025-01-14T13:08:46.158024Z INFO Daemon Daemon Configure sudoer
Jan 14 13:08:46.174453 waagent[1854]: 2025-01-14T13:08:46.159346Z INFO Daemon Daemon Configure sshd
Jan 14 13:08:46.174453 waagent[1854]: 2025-01-14T13:08:46.160147Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 14 13:08:46.174453 waagent[1854]: 2025-01-14T13:08:46.160937Z INFO Daemon Daemon Deploy ssh public key.
Jan 14 13:08:46.176392 systemd-networkd[1336]: eth0: DHCPv6 lease lost
Jan 14 13:08:46.208376 systemd-networkd[1336]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:08:47.294564 waagent[1854]: 2025-01-14T13:08:47.294493Z INFO Daemon Daemon Provisioning complete
Jan 14 13:08:47.308545 waagent[1854]: 2025-01-14T13:08:47.308488Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 14 13:08:47.316058 waagent[1854]: 2025-01-14T13:08:47.308811Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 14 13:08:47.316058 waagent[1854]: 2025-01-14T13:08:47.309903Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 14 13:08:47.437091 waagent[1937]: 2025-01-14T13:08:47.436976Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 14 13:08:47.437525 waagent[1937]: 2025-01-14T13:08:47.437148Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0
Jan 14 13:08:47.437525 waagent[1937]: 2025-01-14T13:08:47.437232Z INFO ExtHandler ExtHandler Python: 3.11.10
Jan 14 13:08:47.466750 waagent[1937]: 2025-01-14T13:08:47.466652Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 14 13:08:47.466970 waagent[1937]: 2025-01-14T13:08:47.466920Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:08:47.467064 waagent[1937]: 2025-01-14T13:08:47.467020Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:08:47.475324 waagent[1937]: 2025-01-14T13:08:47.475239Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:08:47.481181 waagent[1937]: 2025-01-14T13:08:47.481119Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 14 13:08:47.481745 waagent[1937]: 2025-01-14T13:08:47.481691Z INFO ExtHandler
Jan 14 13:08:47.481829 waagent[1937]: 2025-01-14T13:08:47.481790Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d86b106d-0a80-4a9d-873a-90202abff0ec eTag: 1456752408848499324 source: Fabric]
Jan 14 13:08:47.482138 waagent[1937]: 2025-01-14T13:08:47.482091Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 14 13:08:47.482721 waagent[1937]: 2025-01-14T13:08:47.482669Z INFO ExtHandler
Jan 14 13:08:47.482812 waagent[1937]: 2025-01-14T13:08:47.482756Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:08:47.486962 waagent[1937]: 2025-01-14T13:08:47.486911Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 14 13:08:47.550042 waagent[1937]: 2025-01-14T13:08:47.549883Z INFO ExtHandler Downloaded certificate {'thumbprint': '4EC057D59B0DBFFFFEE657E64F64B56E513BF096', 'hasPrivateKey': True}
Jan 14 13:08:47.550586 waagent[1937]: 2025-01-14T13:08:47.550524Z INFO ExtHandler Fetch goal state completed
Jan 14 13:08:47.568801 waagent[1937]: 2025-01-14T13:08:47.568719Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1937
Jan 14 13:08:47.568977 waagent[1937]: 2025-01-14T13:08:47.568922Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 14 13:08:47.570590 waagent[1937]: 2025-01-14T13:08:47.570529Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 14 13:08:47.570952 waagent[1937]: 2025-01-14T13:08:47.570901Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 14 13:08:47.585217 waagent[1937]: 2025-01-14T13:08:47.585164Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 14 13:08:47.585480 waagent[1937]: 2025-01-14T13:08:47.585428Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 14 13:08:47.592518 waagent[1937]: 2025-01-14T13:08:47.592431Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 14 13:08:47.600090 systemd[1]: Reloading requested from client PID 1950 ('systemctl') (unit waagent.service)...
Jan 14 13:08:47.600108 systemd[1]: Reloading...
Jan 14 13:08:47.676325 zram_generator::config[1980]: No configuration found.
Jan 14 13:08:47.817285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:08:47.903573 systemd[1]: Reloading finished in 302 ms.
Jan 14 13:08:47.927344 waagent[1937]: 2025-01-14T13:08:47.926872Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 14 13:08:47.936879 systemd[1]: Reloading requested from client PID 2041 ('systemctl') (unit waagent.service)...
Jan 14 13:08:47.936901 systemd[1]: Reloading...
Jan 14 13:08:48.014328 zram_generator::config[2071]: No configuration found.
Jan 14 13:08:48.146064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:08:48.230920 systemd[1]: Reloading finished in 293 ms.
Jan 14 13:08:48.262759 waagent[1937]: 2025-01-14T13:08:48.262496Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 14 13:08:48.262759 waagent[1937]: 2025-01-14T13:08:48.262701Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 14 13:08:50.400231 waagent[1937]: 2025-01-14T13:08:50.400125Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 14 13:08:50.401059 waagent[1937]: 2025-01-14T13:08:50.400984Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 14 13:08:50.401984 waagent[1937]: 2025-01-14T13:08:50.401911Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 14 13:08:50.402139 waagent[1937]: 2025-01-14T13:08:50.402079Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:08:50.402309 waagent[1937]: 2025-01-14T13:08:50.402246Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:08:50.403066 waagent[1937]: 2025-01-14T13:08:50.403012Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 14 13:08:50.403439 waagent[1937]: 2025-01-14T13:08:50.403357Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 14 13:08:50.403766 waagent[1937]: 2025-01-14T13:08:50.403700Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:08:50.403949 waagent[1937]: 2025-01-14T13:08:50.403907Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:08:50.404121 waagent[1937]: 2025-01-14T13:08:50.404058Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 14 13:08:50.404186 waagent[1937]: 2025-01-14T13:08:50.404151Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 14 13:08:50.404738 waagent[1937]: 2025-01-14T13:08:50.404645Z INFO EnvHandler ExtHandler Configure routes
Jan 14 13:08:50.404846 waagent[1937]: 2025-01-14T13:08:50.404793Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 14 13:08:50.405286 waagent[1937]: 2025-01-14T13:08:50.405179Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 14 13:08:50.405286 waagent[1937]: 2025-01-14T13:08:50.405238Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 14 13:08:50.405286 waagent[1937]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Jan 14 13:08:50.405286 waagent[1937]: eth0        00000000        0108C80A        0003        0        0        1024        00000000        0        0        0
Jan 14 13:08:50.405286 waagent[1937]: eth0        0008C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Jan 14 13:08:50.405286 waagent[1937]: eth0        0108C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Jan 14 13:08:50.405286 waagent[1937]: eth0        10813FA8        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Jan 14 13:08:50.405286 waagent[1937]: eth0        FEA9FEA9        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Jan 14 13:08:50.405831 waagent[1937]: 2025-01-14T13:08:50.405784Z INFO EnvHandler ExtHandler Gateway:None
Jan 14 13:08:50.405890 waagent[1937]: 2025-01-14T13:08:50.405846Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 14 13:08:50.406236 waagent[1937]: 2025-01-14T13:08:50.406153Z INFO EnvHandler ExtHandler Routes:None
Jan 14 13:08:50.418224 waagent[1937]: 2025-01-14T13:08:50.418174Z INFO ExtHandler ExtHandler
Jan 14 13:08:50.418357 waagent[1937]: 2025-01-14T13:08:50.418275Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a1c2bdaa-020d-4826-952b-864909ba9a62 correlation 40b5357e-0835-41ee-aae8-e3756bf129f2 created: 2025-01-14T13:07:42.648521Z]
Jan 14 13:08:50.418739 waagent[1937]: 2025-01-14T13:08:50.418685Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 14 13:08:50.419236 waagent[1937]: 2025-01-14T13:08:50.419187Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 14 13:08:50.464233 waagent[1937]: 2025-01-14T13:08:50.464077Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5290CB3D-2A25-4E72-95AF-45492BF208BA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 14 13:08:50.473628 waagent[1937]: 2025-01-14T13:08:50.473557Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 14 13:08:50.473628 waagent[1937]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:08:50.473628 waagent[1937]:     pkts      bytes target     prot opt in     out     source               destination
Jan 14 13:08:50.473628 waagent[1937]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:08:50.473628 waagent[1937]:     pkts      bytes target     prot opt in     out     source               destination
Jan 14 13:08:50.473628 waagent[1937]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:08:50.473628 waagent[1937]:     pkts      bytes target     prot opt in     out     source               destination
Jan 14 13:08:50.473628 waagent[1937]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Jan 14 13:08:50.473628 waagent[1937]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Jan 14 13:08:50.473628 waagent[1937]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Jan 14 13:08:50.476868 waagent[1937]: 2025-01-14T13:08:50.476808Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 14 13:08:50.476868 waagent[1937]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:08:50.476868 waagent[1937]:     pkts      bytes target     prot opt in     out     source               destination
Jan 14 13:08:50.476868 waagent[1937]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:08:50.476868 waagent[1937]:     pkts      bytes target     prot opt in     out     source               destination
Jan 14 13:08:50.476868 waagent[1937]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:08:50.476868 waagent[1937]:     pkts      bytes target     prot opt in     out     source               destination
Jan 14 13:08:50.476868 waagent[1937]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Jan 14 13:08:50.476868 waagent[1937]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Jan 14 13:08:50.476868 waagent[1937]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Jan 14 13:08:50.477237 waagent[1937]: 2025-01-14T13:08:50.477106Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 14 13:08:50.506937 waagent[1937]: 2025-01-14T13:08:50.506863Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 14 13:08:50.506937 waagent[1937]: Executing ['ip', '-a', '-o', 'link']:
Jan 14 13:08:50.506937 waagent[1937]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 14 13:08:50.506937 waagent[1937]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:b1:8c:ea brd ff:ff:ff:ff:ff:ff
Jan 14 13:08:50.506937 waagent[1937]: 3: enP57905s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:b1:8c:ea brd ff:ff:ff:ff:ff:ff\    altname enP57905p0s2
Jan 14 13:08:50.506937 waagent[1937]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 14 13:08:50.506937 waagent[1937]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Jan 14 13:08:50.506937 waagent[1937]: 2: eth0    inet 10.200.8.19/24 metric 1024 brd 10.200.8.255 scope global eth0\       valid_lft forever preferred_lft forever
Jan 14 13:08:50.506937 waagent[1937]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 14 13:08:50.506937 waagent[1937]: 1: lo    inet6 ::1/128 scope host noprefixroute \       valid_lft forever preferred_lft forever
Jan 14 13:08:50.506937 waagent[1937]: 2: eth0    inet6 fe80::20d:3aff:feb1:8cea/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Jan 14 13:08:50.506937 waagent[1937]: 3: enP57905s1    inet6 fe80::20d:3aff:feb1:8cea/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Jan 14 13:08:55.254779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:08:55.260604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:08:55.354257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:08:55.363615 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:08:58.137521 kubelet[2171]: E0114 13:08:55.905959    2171 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:08:55.909605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:08:55.909740 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:09:06.004880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 14 13:09:06.010520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:09:06.450673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:09:06.455623 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:09:06.677699 kubelet[2187]: E0114 13:09:06.677647    2187 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:09:06.680343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:09:06.680530 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:09:07.122242 chronyd[1705]: Selected source PHC0
Jan 14 13:09:16.754783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 14 13:09:16.761546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:09:17.089526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:09:17.099643 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:09:17.404005 kubelet[2203]: E0114 13:09:17.403821    2203 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:09:17.406284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:09:17.406499 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:09:20.664478 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 13:09:20.675664 systemd[1]: Started sshd@0-10.200.8.19:22-10.200.16.10:50412.service - OpenSSH per-connection server daemon (10.200.16.10:50412).
Jan 14 13:09:21.404793 sshd[2212]: Accepted publickey for core from 10.200.16.10 port 50412 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:21.406526 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:21.411341 systemd-logind[1710]: New session 3 of user core.
Jan 14 13:09:21.417492 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 13:09:21.967347 systemd[1]: Started sshd@1-10.200.8.19:22-10.200.16.10:50418.service - OpenSSH per-connection server daemon (10.200.16.10:50418).
Jan 14 13:09:22.608893 sshd[2217]: Accepted publickey for core from 10.200.16.10 port 50418 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:22.610627 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:22.615375 systemd-logind[1710]: New session 4 of user core.
Jan 14 13:09:22.626504 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 13:09:23.063046 sshd[2219]: Connection closed by 10.200.16.10 port 50418
Jan 14 13:09:23.063994 sshd-session[2217]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:23.067027 systemd[1]: sshd@1-10.200.8.19:22-10.200.16.10:50418.service: Deactivated successfully.
Jan 14 13:09:23.069215 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 13:09:23.070990 systemd-logind[1710]: Session 4 logged out. Waiting for processes to exit.
Jan 14 13:09:23.072204 systemd-logind[1710]: Removed session 4.
Jan 14 13:09:23.179425 systemd[1]: Started sshd@2-10.200.8.19:22-10.200.16.10:50426.service - OpenSSH per-connection server daemon (10.200.16.10:50426).
Jan 14 13:09:23.822750 sshd[2224]: Accepted publickey for core from 10.200.16.10 port 50426 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:23.824460 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:23.830375 systemd-logind[1710]: New session 5 of user core.
Jan 14 13:09:23.836468 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 13:09:24.270547 sshd[2226]: Connection closed by 10.200.16.10 port 50426
Jan 14 13:09:24.271466 sshd-session[2224]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:24.274821 systemd[1]: sshd@2-10.200.8.19:22-10.200.16.10:50426.service: Deactivated successfully.
Jan 14 13:09:24.277087 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 13:09:24.279112 systemd-logind[1710]: Session 5 logged out. Waiting for processes to exit.
Jan 14 13:09:24.280214 systemd-logind[1710]: Removed session 5.
Jan 14 13:09:24.383805 systemd[1]: Started sshd@3-10.200.8.19:22-10.200.16.10:50438.service - OpenSSH per-connection server daemon (10.200.16.10:50438).
Jan 14 13:09:25.027261 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 50438 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:25.028921 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:25.034372 systemd-logind[1710]: New session 6 of user core.
Jan 14 13:09:25.043479 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 13:09:25.650158 sshd[2233]: Connection closed by 10.200.16.10 port 50438
Jan 14 13:09:25.651030 sshd-session[2231]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:25.655320 systemd[1]: sshd@3-10.200.8.19:22-10.200.16.10:50438.service: Deactivated successfully.
Jan 14 13:09:25.657241 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 13:09:25.657974 systemd-logind[1710]: Session 6 logged out. Waiting for processes to exit.
Jan 14 13:09:25.658897 systemd-logind[1710]: Removed session 6.
Jan 14 13:09:25.762170 systemd[1]: Started sshd@4-10.200.8.19:22-10.200.16.10:50444.service - OpenSSH per-connection server daemon (10.200.16.10:50444).
Jan 14 13:09:26.415897 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 50444 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:26.417355 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:26.421925 systemd-logind[1710]: New session 7 of user core.
Jan 14 13:09:26.428457 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 13:09:26.869921 sudo[2241]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 13:09:26.870283 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:09:26.893824 sudo[2241]: pam_unix(sudo:session): session closed for user root
Jan 14 13:09:26.998808 sshd[2240]: Connection closed by 10.200.16.10 port 50444
Jan 14 13:09:26.999998 sshd-session[2238]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:27.003046 systemd[1]: sshd@4-10.200.8.19:22-10.200.16.10:50444.service: Deactivated successfully.
Jan 14 13:09:27.005086 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 13:09:27.006659 systemd-logind[1710]: Session 7 logged out. Waiting for processes to exit.
Jan 14 13:09:27.007773 systemd-logind[1710]: Removed session 7.
Jan 14 13:09:27.114269 systemd[1]: Started sshd@5-10.200.8.19:22-10.200.16.10:54116.service - OpenSSH per-connection server daemon (10.200.16.10:54116).
Jan 14 13:09:27.504747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 14 13:09:27.511545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:09:27.604895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:09:27.609540 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:09:27.652153 kubelet[2256]: E0114 13:09:27.652103    2256 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:09:27.654867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:09:27.655048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:09:27.755018 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jan 14 13:09:27.757185 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 54116 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:27.758846 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:27.764087 systemd-logind[1710]: New session 8 of user core.
Jan 14 13:09:27.771448 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 13:09:28.110917 sudo[2266]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 13:09:28.111317 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:09:28.115037 sudo[2266]: pam_unix(sudo:session): session closed for user root
Jan 14 13:09:28.120015 sudo[2265]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 13:09:28.120384 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:09:28.141715 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:09:28.168110 augenrules[2288]: No rules
Jan 14 13:09:28.169547 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:09:28.169775 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:09:28.171015 sudo[2265]: pam_unix(sudo:session): session closed for user root
Jan 14 13:09:28.231351 update_engine[1712]: I20250114 13:09:28.231251  1712 update_attempter.cc:509] Updating boot flags...
Jan 14 13:09:28.274808 sshd[2264]: Connection closed by 10.200.16.10 port 54116
Jan 14 13:09:28.278378 sshd-session[2246]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:28.284949 systemd[1]: sshd@5-10.200.8.19:22-10.200.16.10:54116.service: Deactivated successfully.
Jan 14 13:09:28.286866 systemd[1]: session-8.scope: Deactivated successfully.
Jan 14 13:09:28.287918 systemd-logind[1710]: Session 8 logged out. Waiting for processes to exit.
Jan 14 13:09:28.289358 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2308)
Jan 14 13:09:28.292998 systemd-logind[1710]: Removed session 8.
Jan 14 13:09:28.413909 systemd[1]: Started sshd@6-10.200.8.19:22-10.200.16.10:54128.service - OpenSSH per-connection server daemon (10.200.16.10:54128).
Jan 14 13:09:28.452394 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2313)
Jan 14 13:09:29.071862 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 54128 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:09:29.073507 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:09:29.079065 systemd-logind[1710]: New session 9 of user core.
Jan 14 13:09:29.085462 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 13:09:29.422926 sudo[2413]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 13:09:29.423391 sudo[2413]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:09:30.471670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:09:30.477580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:09:30.506927 systemd[1]: Reloading requested from client PID 2450 ('systemctl') (unit session-9.scope)...
Jan 14 13:09:30.506964 systemd[1]: Reloading...
Jan 14 13:09:30.644398 zram_generator::config[2492]: No configuration found.
Jan 14 13:09:30.762362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:09:30.845609 systemd[1]: Reloading finished in 338 ms.
Jan 14 13:09:31.112266 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 14 13:09:31.112441 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 14 13:09:31.112798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:09:31.119750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:09:31.228156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:09:31.238667 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 13:09:31.779345 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:09:31.779345 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 13:09:31.779345 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:09:31.779345 kubelet[2558]: I0114 13:09:31.778603    2558 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 13:09:32.291307 kubelet[2558]: I0114 13:09:32.291253    2558 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 14 13:09:32.291307 kubelet[2558]: I0114 13:09:32.291284    2558 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 13:09:32.291588 kubelet[2558]: I0114 13:09:32.291565    2558 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 14 13:09:32.308974 kubelet[2558]: I0114 13:09:32.308654    2558 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 13:09:32.327281 kubelet[2558]: I0114 13:09:32.327245    2558 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 14 13:09:32.327614 kubelet[2558]: I0114 13:09:32.327536    2558 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 13:09:32.327793 kubelet[2558]: I0114 13:09:32.327588    2558 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 13:09:32.328519 kubelet[2558]: I0114 13:09:32.328496    2558 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 13:09:32.328645 kubelet[2558]: I0114 13:09:32.328524    2558 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 13:09:32.328723 kubelet[2558]: I0114 13:09:32.328703    2558 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:09:32.329537 kubelet[2558]: I0114 13:09:32.329520    2558 kubelet.go:400] "Attempting to sync node with API server"
Jan 14 13:09:32.329618 kubelet[2558]: I0114 13:09:32.329541    2558 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 13:09:32.329618 kubelet[2558]: I0114 13:09:32.329571    2558 kubelet.go:312] "Adding apiserver pod source"
Jan 14 13:09:32.329618 kubelet[2558]: I0114 13:09:32.329594    2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 13:09:32.330072 kubelet[2558]: E0114 13:09:32.330040    2558 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:32.331395 kubelet[2558]: E0114 13:09:32.330998    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:32.334271 kubelet[2558]: I0114 13:09:32.334030    2558 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 14 13:09:32.335533 kubelet[2558]: I0114 13:09:32.335513    2558 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 13:09:32.335616 kubelet[2558]: W0114 13:09:32.335590    2558 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 13:09:32.336322 kubelet[2558]: I0114 13:09:32.336222    2558 server.go:1264] "Started kubelet"
Jan 14 13:09:32.336729 kubelet[2558]: I0114 13:09:32.336691    2558 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 13:09:32.338531 kubelet[2558]: I0114 13:09:32.337892    2558 server.go:455] "Adding debug handlers to kubelet server"
Jan 14 13:09:32.340597 kubelet[2558]: I0114 13:09:32.340529    2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 13:09:32.340818 kubelet[2558]: I0114 13:09:32.340796    2558 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 13:09:32.341555 kubelet[2558]: W0114 13:09:32.341526    2558 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 14 13:09:32.341618 kubelet[2558]: E0114 13:09:32.341560    2558 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 14 13:09:32.341715 kubelet[2558]: W0114 13:09:32.341693    2558 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 14 13:09:32.341832 kubelet[2558]: E0114 13:09:32.341716    2558 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 14 13:09:32.344178 kubelet[2558]: I0114 13:09:32.344150    2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 13:09:32.349140 kubelet[2558]: E0114 13:09:32.348595    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:32.349140 kubelet[2558]: I0114 13:09:32.348657    2558 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 13:09:32.349140 kubelet[2558]: I0114 13:09:32.348759    2558 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 14 13:09:32.349140 kubelet[2558]: I0114 13:09:32.348806    2558 reconciler.go:26] "Reconciler: start to sync state"
Jan 14 13:09:32.349501 kubelet[2558]: E0114 13:09:32.349477    2558 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 13:09:32.352217 kubelet[2558]: I0114 13:09:32.352194    2558 factory.go:221] Registration of the systemd container factory successfully
Jan 14 13:09:32.352385 kubelet[2558]: I0114 13:09:32.352315    2558 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 13:09:32.355677 kubelet[2558]: I0114 13:09:32.355657    2558 factory.go:221] Registration of the containerd container factory successfully
Jan 14 13:09:32.362234 kubelet[2558]: W0114 13:09:32.362196    2558 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 14 13:09:32.362234 kubelet[2558]: E0114 13:09:32.362232    2558 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 14 13:09:32.362485 kubelet[2558]: E0114 13:09:32.362447    2558 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 14 13:09:32.362650 kubelet[2558]: E0114 13:09:32.362535    2558 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.19.181a911fdc9f6595  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.19,UID:10.200.8.19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.8.19,},FirstTimestamp:2025-01-14 13:09:32.336194965 +0000 UTC m=+1.094130482,LastTimestamp:2025-01-14 13:09:32.336194965 +0000 UTC m=+1.094130482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.19,}"
Jan 14 13:09:32.376225 kubelet[2558]: I0114 13:09:32.376059    2558 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 13:09:32.376225 kubelet[2558]: I0114 13:09:32.376078    2558 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 13:09:32.376225 kubelet[2558]: I0114 13:09:32.376095    2558 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:09:32.384821 kubelet[2558]: I0114 13:09:32.384218    2558 policy_none.go:49] "None policy: Start"
Jan 14 13:09:32.385718 kubelet[2558]: I0114 13:09:32.385359    2558 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 13:09:32.385718 kubelet[2558]: I0114 13:09:32.385385    2558 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 13:09:32.396080 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 14 13:09:32.403995 kubelet[2558]: I0114 13:09:32.403729    2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 13:09:32.405738 kubelet[2558]: I0114 13:09:32.405609    2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 13:09:32.406218 kubelet[2558]: I0114 13:09:32.405897    2558 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 13:09:32.406218 kubelet[2558]: I0114 13:09:32.405927    2558 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 14 13:09:32.406218 kubelet[2558]: E0114 13:09:32.405973    2558 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 13:09:32.414071 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 13:09:32.421240 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 13:09:32.422940 kubelet[2558]: W0114 13:09:32.422903    2558 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device
Jan 14 13:09:32.432146 kubelet[2558]: I0114 13:09:32.432104    2558 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 13:09:32.432414 kubelet[2558]: I0114 13:09:32.432370    2558 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 13:09:32.432532 kubelet[2558]: I0114 13:09:32.432520    2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 13:09:32.435532 kubelet[2558]: E0114 13:09:32.435490    2558 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.19\" not found"
Jan 14 13:09:32.450314 kubelet[2558]: I0114 13:09:32.450165    2558 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.19"
Jan 14 13:09:32.457076 kubelet[2558]: I0114 13:09:32.457035    2558 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.19"
Jan 14 13:09:32.471252 kubelet[2558]: E0114 13:09:32.471209    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:32.572007 kubelet[2558]: E0114 13:09:32.571855    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:32.672353 kubelet[2558]: E0114 13:09:32.672265    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:32.753655 sudo[2413]: pam_unix(sudo:session): session closed for user root
Jan 14 13:09:32.772984 kubelet[2558]: E0114 13:09:32.772934    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:32.856085 sshd[2412]: Connection closed by 10.200.16.10 port 54128
Jan 14 13:09:32.857216 sshd-session[2361]: pam_unix(sshd:session): session closed for user core
Jan 14 13:09:32.860736 systemd[1]: sshd@6-10.200.8.19:22-10.200.16.10:54128.service: Deactivated successfully.
Jan 14 13:09:32.863101 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 13:09:32.864718 systemd-logind[1710]: Session 9 logged out. Waiting for processes to exit.
Jan 14 13:09:32.865873 systemd-logind[1710]: Removed session 9.
Jan 14 13:09:32.874046 kubelet[2558]: E0114 13:09:32.874008    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:32.974876 kubelet[2558]: E0114 13:09:32.974818    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:33.075748 kubelet[2558]: E0114 13:09:33.075688    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:33.176600 kubelet[2558]: E0114 13:09:33.176441    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:33.277393 kubelet[2558]: E0114 13:09:33.277328    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:33.293657 kubelet[2558]: I0114 13:09:33.293593    2558 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 14 13:09:33.293893 kubelet[2558]: W0114 13:09:33.293866    2558 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 14 13:09:33.332214 kubelet[2558]: E0114 13:09:33.332151    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:33.378445 kubelet[2558]: E0114 13:09:33.378384    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:33.478823 kubelet[2558]: E0114 13:09:33.478654    2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Jan 14 13:09:33.580198 kubelet[2558]: I0114 13:09:33.580159    2558 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 14 13:09:33.580610 containerd[1726]: time="2025-01-14T13:09:33.580554760Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 14 13:09:33.581149 kubelet[2558]: I0114 13:09:33.580807    2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 14 13:09:34.333113 kubelet[2558]: I0114 13:09:34.333049    2558 apiserver.go:52] "Watching apiserver"
Jan 14 13:09:34.333759 kubelet[2558]: E0114 13:09:34.333045    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:34.337276 kubelet[2558]: I0114 13:09:34.337215    2558 topology_manager.go:215] "Topology Admit Handler" podUID="ef087c79-d22f-4fd3-9082-757761a5c25b" podNamespace="calico-system" podName="calico-node-6bgmt"
Jan 14 13:09:34.338314 kubelet[2558]: I0114 13:09:34.337353    2558 topology_manager.go:215] "Topology Admit Handler" podUID="7f807593-f91e-4011-a174-603d407a7151" podNamespace="calico-system" podName="csi-node-driver-cjcc6"
Jan 14 13:09:34.338314 kubelet[2558]: I0114 13:09:34.337439    2558 topology_manager.go:215] "Topology Admit Handler" podUID="1cbcada8-a75b-4918-a688-739c85547347" podNamespace="kube-system" podName="kube-proxy-7cbm7"
Jan 14 13:09:34.338314 kubelet[2558]: E0114 13:09:34.337600    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:34.345642 systemd[1]: Created slice kubepods-besteffort-pod1cbcada8_a75b_4918_a688_739c85547347.slice - libcontainer container kubepods-besteffort-pod1cbcada8_a75b_4918_a688_739c85547347.slice.
Jan 14 13:09:34.350611 kubelet[2558]: I0114 13:09:34.350368    2558 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 14 13:09:34.355963 systemd[1]: Created slice kubepods-besteffort-podef087c79_d22f_4fd3_9082_757761a5c25b.slice - libcontainer container kubepods-besteffort-podef087c79_d22f_4fd3_9082_757761a5c25b.slice.
Jan 14 13:09:34.360015 kubelet[2558]: I0114 13:09:34.359967    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-policysync\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360015 kubelet[2558]: I0114 13:09:34.360006    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef087c79-d22f-4fd3-9082-757761a5c25b-tigera-ca-bundle\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360373 kubelet[2558]: I0114 13:09:34.360030    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cbcada8-a75b-4918-a688-739c85547347-xtables-lock\") pod \"kube-proxy-7cbm7\" (UID: \"1cbcada8-a75b-4918-a688-739c85547347\") " pod="kube-system/kube-proxy-7cbm7"
Jan 14 13:09:34.360373 kubelet[2558]: I0114 13:09:34.360050    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cbcada8-a75b-4918-a688-739c85547347-lib-modules\") pod \"kube-proxy-7cbm7\" (UID: \"1cbcada8-a75b-4918-a688-739c85547347\") " pod="kube-system/kube-proxy-7cbm7"
Jan 14 13:09:34.360373 kubelet[2558]: I0114 13:09:34.360071    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-xtables-lock\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360373 kubelet[2558]: I0114 13:09:34.360092    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-cni-net-dir\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360373 kubelet[2558]: I0114 13:09:34.360132    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-flexvol-driver-host\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360572 kubelet[2558]: I0114 13:09:34.360158    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f807593-f91e-4011-a174-603d407a7151-varrun\") pod \"csi-node-driver-cjcc6\" (UID: \"7f807593-f91e-4011-a174-603d407a7151\") " pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:34.360572 kubelet[2558]: I0114 13:09:34.360178    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f807593-f91e-4011-a174-603d407a7151-kubelet-dir\") pod \"csi-node-driver-cjcc6\" (UID: \"7f807593-f91e-4011-a174-603d407a7151\") " pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:34.360572 kubelet[2558]: I0114 13:09:34.360198    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-var-run-calico\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360572 kubelet[2558]: I0114 13:09:34.360218    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-cni-log-dir\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360572 kubelet[2558]: I0114 13:09:34.360241    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mknqv\" (UniqueName: \"kubernetes.io/projected/ef087c79-d22f-4fd3-9082-757761a5c25b-kube-api-access-mknqv\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360771 kubelet[2558]: I0114 13:09:34.360265    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zscb4\" (UniqueName: \"kubernetes.io/projected/7f807593-f91e-4011-a174-603d407a7151-kube-api-access-zscb4\") pod \"csi-node-driver-cjcc6\" (UID: \"7f807593-f91e-4011-a174-603d407a7151\") " pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:34.360771 kubelet[2558]: I0114 13:09:34.360328    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsnsl\" (UniqueName: \"kubernetes.io/projected/1cbcada8-a75b-4918-a688-739c85547347-kube-api-access-zsnsl\") pod \"kube-proxy-7cbm7\" (UID: \"1cbcada8-a75b-4918-a688-739c85547347\") " pod="kube-system/kube-proxy-7cbm7"
Jan 14 13:09:34.360771 kubelet[2558]: I0114 13:09:34.360354    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-cni-bin-dir\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360771 kubelet[2558]: I0114 13:09:34.360377    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ef087c79-d22f-4fd3-9082-757761a5c25b-node-certs\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360771 kubelet[2558]: I0114 13:09:34.360412    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-var-lib-calico\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.360883 kubelet[2558]: I0114 13:09:34.360438    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f807593-f91e-4011-a174-603d407a7151-socket-dir\") pod \"csi-node-driver-cjcc6\" (UID: \"7f807593-f91e-4011-a174-603d407a7151\") " pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:34.360883 kubelet[2558]: I0114 13:09:34.360462    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f807593-f91e-4011-a174-603d407a7151-registration-dir\") pod \"csi-node-driver-cjcc6\" (UID: \"7f807593-f91e-4011-a174-603d407a7151\") " pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:34.360883 kubelet[2558]: I0114 13:09:34.360504    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1cbcada8-a75b-4918-a688-739c85547347-kube-proxy\") pod \"kube-proxy-7cbm7\" (UID: \"1cbcada8-a75b-4918-a688-739c85547347\") " pod="kube-system/kube-proxy-7cbm7"
Jan 14 13:09:34.360883 kubelet[2558]: I0114 13:09:34.360531    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef087c79-d22f-4fd3-9082-757761a5c25b-lib-modules\") pod \"calico-node-6bgmt\" (UID: \"ef087c79-d22f-4fd3-9082-757761a5c25b\") " pod="calico-system/calico-node-6bgmt"
Jan 14 13:09:34.463266 kubelet[2558]: E0114 13:09:34.463088    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.463266 kubelet[2558]: W0114 13:09:34.463122    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.463266 kubelet[2558]: E0114 13:09:34.463150    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.463897 kubelet[2558]: E0114 13:09:34.463754    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.463897 kubelet[2558]: W0114 13:09:34.463776    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.463897 kubelet[2558]: E0114 13:09:34.463796    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.464332 kubelet[2558]: E0114 13:09:34.464244    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.464332 kubelet[2558]: W0114 13:09:34.464261    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.464332 kubelet[2558]: E0114 13:09:34.464277    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.465272 kubelet[2558]: E0114 13:09:34.465187    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.465272 kubelet[2558]: W0114 13:09:34.465207    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.465586 kubelet[2558]: E0114 13:09:34.465481    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.465999 kubelet[2558]: E0114 13:09:34.465862    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.465999 kubelet[2558]: W0114 13:09:34.465882    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.465999 kubelet[2558]: E0114 13:09:34.465901    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.467306 kubelet[2558]: E0114 13:09:34.467239    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.467306 kubelet[2558]: W0114 13:09:34.467255    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.467306 kubelet[2558]: E0114 13:09:34.467280    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.467800 kubelet[2558]: E0114 13:09:34.467671    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.467800 kubelet[2558]: W0114 13:09:34.467685    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.467800 kubelet[2558]: E0114 13:09:34.467702    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.468222 kubelet[2558]: E0114 13:09:34.468061    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.468222 kubelet[2558]: W0114 13:09:34.468074    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.468222 kubelet[2558]: E0114 13:09:34.468086    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.468576 kubelet[2558]: E0114 13:09:34.468512    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.468576 kubelet[2558]: W0114 13:09:34.468527    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.468576 kubelet[2558]: E0114 13:09:34.468541    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.470112 kubelet[2558]: E0114 13:09:34.469781    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.470112 kubelet[2558]: W0114 13:09:34.469796    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.470112 kubelet[2558]: E0114 13:09:34.469810    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.473450 kubelet[2558]: E0114 13:09:34.473426    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.473560 kubelet[2558]: W0114 13:09:34.473545    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.474203 kubelet[2558]: E0114 13:09:34.474182    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.478941 kubelet[2558]: E0114 13:09:34.478921    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.479065 kubelet[2558]: W0114 13:09:34.479051    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.479149 kubelet[2558]: E0114 13:09:34.479135    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.482983 kubelet[2558]: E0114 13:09:34.482957    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:34.482983 kubelet[2558]: W0114 13:09:34.482983    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:34.483147 kubelet[2558]: E0114 13:09:34.483006    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:34.655206 containerd[1726]: time="2025-01-14T13:09:34.655051322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7cbm7,Uid:1cbcada8-a75b-4918-a688-739c85547347,Namespace:kube-system,Attempt:0,}"
Jan 14 13:09:34.659176 containerd[1726]: time="2025-01-14T13:09:34.659131168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6bgmt,Uid:ef087c79-d22f-4fd3-9082-757761a5c25b,Namespace:calico-system,Attempt:0,}"
Jan 14 13:09:35.334186 kubelet[2558]: E0114 13:09:35.334144    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:35.371089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628729010.mount: Deactivated successfully.
Jan 14 13:09:35.410756 containerd[1726]: time="2025-01-14T13:09:35.410694998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 14 13:09:35.420305 containerd[1726]: time="2025-01-14T13:09:35.420244405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 14 13:09:35.425702 containerd[1726]: time="2025-01-14T13:09:35.425655766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 14 13:09:35.432434 containerd[1726]: time="2025-01-14T13:09:35.432386041Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 14 13:09:35.436671 containerd[1726]: time="2025-01-14T13:09:35.436605789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 14 13:09:35.441695 containerd[1726]: time="2025-01-14T13:09:35.441635945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 14 13:09:35.442730 containerd[1726]: time="2025-01-14T13:09:35.442479554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.238585ms"
Jan 14 13:09:35.444305 containerd[1726]: time="2025-01-14T13:09:35.444252074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 789.096951ms"
Jan 14 13:09:36.213027 containerd[1726]: time="2025-01-14T13:09:36.212686793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:09:36.213027 containerd[1726]: time="2025-01-14T13:09:36.212774594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:09:36.213027 containerd[1726]: time="2025-01-14T13:09:36.212794095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:36.213027 containerd[1726]: time="2025-01-14T13:09:36.212920596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:36.220341 containerd[1726]: time="2025-01-14T13:09:36.212998697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:09:36.220341 containerd[1726]: time="2025-01-14T13:09:36.213054997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:09:36.220341 containerd[1726]: time="2025-01-14T13:09:36.213072698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:36.220341 containerd[1726]: time="2025-01-14T13:09:36.213155999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:36.335333 kubelet[2558]: E0114 13:09:36.335248    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:36.406413 kubelet[2558]: E0114 13:09:36.406361    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:36.620530 systemd[1]: Started cri-containerd-166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b.scope - libcontainer container 166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b.
Jan 14 13:09:36.622091 systemd[1]: Started cri-containerd-ea649f41eac6b856ebfd490fd2c2c84630e39dfddd0477a3201acfc25d149b7c.scope - libcontainer container ea649f41eac6b856ebfd490fd2c2c84630e39dfddd0477a3201acfc25d149b7c.
Jan 14 13:09:36.657318 containerd[1726]: time="2025-01-14T13:09:36.657193579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7cbm7,Uid:1cbcada8-a75b-4918-a688-739c85547347,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea649f41eac6b856ebfd490fd2c2c84630e39dfddd0477a3201acfc25d149b7c\""
Jan 14 13:09:36.664379 containerd[1726]: time="2025-01-14T13:09:36.664341559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 14 13:09:36.664683 containerd[1726]: time="2025-01-14T13:09:36.664643163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6bgmt,Uid:ef087c79-d22f-4fd3-9082-757761a5c25b,Namespace:calico-system,Attempt:0,} returns sandbox id \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\""
Jan 14 13:09:37.336216 kubelet[2558]: E0114 13:09:37.336178    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:37.740693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078034404.mount: Deactivated successfully.
Jan 14 13:09:38.218318 containerd[1726]: time="2025-01-14T13:09:38.218256689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:38.222717 containerd[1726]: time="2025-01-14T13:09:38.222654238Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Jan 14 13:09:38.226112 containerd[1726]: time="2025-01-14T13:09:38.226051276Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:38.233015 containerd[1726]: time="2025-01-14T13:09:38.232954854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:38.233973 containerd[1726]: time="2025-01-14T13:09:38.233538360Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.5691545s"
Jan 14 13:09:38.233973 containerd[1726]: time="2025-01-14T13:09:38.233581561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Jan 14 13:09:38.235306 containerd[1726]: time="2025-01-14T13:09:38.235151878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 14 13:09:38.236240 containerd[1726]: time="2025-01-14T13:09:38.236209690Z" level=info msg="CreateContainer within sandbox \"ea649f41eac6b856ebfd490fd2c2c84630e39dfddd0477a3201acfc25d149b7c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 13:09:38.295871 containerd[1726]: time="2025-01-14T13:09:38.295817759Z" level=info msg="CreateContainer within sandbox \"ea649f41eac6b856ebfd490fd2c2c84630e39dfddd0477a3201acfc25d149b7c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9cbbba57e04bc1f1a620296a66612aa05bba17aaa505c207730076fa11cc674\""
Jan 14 13:09:38.296657 containerd[1726]: time="2025-01-14T13:09:38.296622768Z" level=info msg="StartContainer for \"a9cbbba57e04bc1f1a620296a66612aa05bba17aaa505c207730076fa11cc674\""
Jan 14 13:09:38.329505 systemd[1]: Started cri-containerd-a9cbbba57e04bc1f1a620296a66612aa05bba17aaa505c207730076fa11cc674.scope - libcontainer container a9cbbba57e04bc1f1a620296a66612aa05bba17aaa505c207730076fa11cc674.
Jan 14 13:09:38.336829 kubelet[2558]: E0114 13:09:38.336778    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:38.362961 containerd[1726]: time="2025-01-14T13:09:38.362916111Z" level=info msg="StartContainer for \"a9cbbba57e04bc1f1a620296a66612aa05bba17aaa505c207730076fa11cc674\" returns successfully"
Jan 14 13:09:38.407703 kubelet[2558]: E0114 13:09:38.407162    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:38.439051 kubelet[2558]: I0114 13:09:38.438990    2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7cbm7" podStartSLOduration=4.866414725 podStartE2EDuration="6.438973464s" podCreationTimestamp="2025-01-14 13:09:32 +0000 UTC" firstStartedPulling="2025-01-14 13:09:36.662003933 +0000 UTC m=+5.419939450" lastFinishedPulling="2025-01-14 13:09:38.234562772 +0000 UTC m=+6.992498189" observedRunningTime="2025-01-14 13:09:38.438867463 +0000 UTC m=+7.196802880" watchObservedRunningTime="2025-01-14 13:09:38.438973464 +0000 UTC m=+7.196908981"
Jan 14 13:09:38.478532 kubelet[2558]: E0114 13:09:38.477717    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.478532 kubelet[2558]: W0114 13:09:38.477741    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.478532 kubelet[2558]: E0114 13:09:38.477766    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.478532 kubelet[2558]: E0114 13:09:38.478042    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.478532 kubelet[2558]: W0114 13:09:38.478055    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.478532 kubelet[2558]: E0114 13:09:38.478072    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.478532 kubelet[2558]: E0114 13:09:38.478315    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.478532 kubelet[2558]: W0114 13:09:38.478327    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.478532 kubelet[2558]: E0114 13:09:38.478339    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.479027 kubelet[2558]: E0114 13:09:38.478573    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.479027 kubelet[2558]: W0114 13:09:38.478584    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.479027 kubelet[2558]: E0114 13:09:38.478597    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.479027 kubelet[2558]: E0114 13:09:38.478832    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.479027 kubelet[2558]: W0114 13:09:38.478842    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.479027 kubelet[2558]: E0114 13:09:38.478855    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.479277 kubelet[2558]: E0114 13:09:38.479071    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.479277 kubelet[2558]: W0114 13:09:38.479081    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.479277 kubelet[2558]: E0114 13:09:38.479092    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.479475 kubelet[2558]: E0114 13:09:38.479314    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.479475 kubelet[2558]: W0114 13:09:38.479325    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.479475 kubelet[2558]: E0114 13:09:38.479337    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.479604 kubelet[2558]: E0114 13:09:38.479573    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.479604 kubelet[2558]: W0114 13:09:38.479583    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.479604 kubelet[2558]: E0114 13:09:38.479594    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.479890 kubelet[2558]: E0114 13:09:38.479818    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.479890 kubelet[2558]: W0114 13:09:38.479832    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.479890 kubelet[2558]: E0114 13:09:38.479845    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.480083 kubelet[2558]: E0114 13:09:38.480068    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.480083 kubelet[2558]: W0114 13:09:38.480078    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.480167 kubelet[2558]: E0114 13:09:38.480104    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.480341 kubelet[2558]: E0114 13:09:38.480323    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.480341 kubelet[2558]: W0114 13:09:38.480336    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.480483 kubelet[2558]: E0114 13:09:38.480348    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.480563 kubelet[2558]: E0114 13:09:38.480545    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.480563 kubelet[2558]: W0114 13:09:38.480558    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.480660 kubelet[2558]: E0114 13:09:38.480571    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.480783 kubelet[2558]: E0114 13:09:38.480768    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.480783 kubelet[2558]: W0114 13:09:38.480780    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.480928 kubelet[2558]: E0114 13:09:38.480792    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.480986 kubelet[2558]: E0114 13:09:38.480967    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.480986 kubelet[2558]: W0114 13:09:38.480977    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.481105 kubelet[2558]: E0114 13:09:38.480995    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.481192 kubelet[2558]: E0114 13:09:38.481169    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.481192 kubelet[2558]: W0114 13:09:38.481179    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.481312 kubelet[2558]: E0114 13:09:38.481191    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.481476 kubelet[2558]: E0114 13:09:38.481458    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.481476 kubelet[2558]: W0114 13:09:38.481473    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.481663 kubelet[2558]: E0114 13:09:38.481487    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.481740 kubelet[2558]: E0114 13:09:38.481683    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.481740 kubelet[2558]: W0114 13:09:38.481695    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.481740 kubelet[2558]: E0114 13:09:38.481707    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.481923 kubelet[2558]: E0114 13:09:38.481884    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.481923 kubelet[2558]: W0114 13:09:38.481895    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.481923 kubelet[2558]: E0114 13:09:38.481907    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.482088 kubelet[2558]: E0114 13:09:38.482075    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.482088 kubelet[2558]: W0114 13:09:38.482084    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.482231 kubelet[2558]: E0114 13:09:38.482095    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.482321 kubelet[2558]: E0114 13:09:38.482304    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.482321 kubelet[2558]: W0114 13:09:38.482315    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.482415 kubelet[2558]: E0114 13:09:38.482328    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.484541 kubelet[2558]: E0114 13:09:38.484523    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.484541 kubelet[2558]: W0114 13:09:38.484537    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.484652 kubelet[2558]: E0114 13:09:38.484551    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.484822 kubelet[2558]: E0114 13:09:38.484806    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.484822 kubelet[2558]: W0114 13:09:38.484819    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.484931 kubelet[2558]: E0114 13:09:38.484837    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.485072 kubelet[2558]: E0114 13:09:38.485056    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.485072 kubelet[2558]: W0114 13:09:38.485069    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.485180 kubelet[2558]: E0114 13:09:38.485087    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.485391 kubelet[2558]: E0114 13:09:38.485374    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.485391 kubelet[2558]: W0114 13:09:38.485387    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.485486 kubelet[2558]: E0114 13:09:38.485405    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.485626 kubelet[2558]: E0114 13:09:38.485610    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.485626 kubelet[2558]: W0114 13:09:38.485623    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.485736 kubelet[2558]: E0114 13:09:38.485640    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.485892 kubelet[2558]: E0114 13:09:38.485875    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.485892 kubelet[2558]: W0114 13:09:38.485888    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.486000 kubelet[2558]: E0114 13:09:38.485975    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.486307 kubelet[2558]: E0114 13:09:38.486277    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.486307 kubelet[2558]: W0114 13:09:38.486303    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.486433 kubelet[2558]: E0114 13:09:38.486322    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.486562 kubelet[2558]: E0114 13:09:38.486545    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.486562 kubelet[2558]: W0114 13:09:38.486558    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.486662 kubelet[2558]: E0114 13:09:38.486577    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.486802 kubelet[2558]: E0114 13:09:38.486786    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.486802 kubelet[2558]: W0114 13:09:38.486798    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.486908 kubelet[2558]: E0114 13:09:38.486824    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.487064 kubelet[2558]: E0114 13:09:38.487047    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.487064 kubelet[2558]: W0114 13:09:38.487060    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.487163 kubelet[2558]: E0114 13:09:38.487079    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.487484 kubelet[2558]: E0114 13:09:38.487467    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.487484 kubelet[2558]: W0114 13:09:38.487480    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.487679 kubelet[2558]: E0114 13:09:38.487542    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:38.487679 kubelet[2558]: E0114 13:09:38.487662    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:38.487679 kubelet[2558]: W0114 13:09:38.487671    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:38.487770 kubelet[2558]: E0114 13:09:38.487682    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.337214 kubelet[2558]: E0114 13:09:39.337150    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:39.489399 kubelet[2558]: E0114 13:09:39.489367    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.489399 kubelet[2558]: W0114 13:09:39.489388    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.489617 kubelet[2558]: E0114 13:09:39.489411    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.489668 kubelet[2558]: E0114 13:09:39.489645    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.489668 kubelet[2558]: W0114 13:09:39.489656    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.489757 kubelet[2558]: E0114 13:09:39.489670    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.489912 kubelet[2558]: E0114 13:09:39.489892    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.489912 kubelet[2558]: W0114 13:09:39.489906    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.490045 kubelet[2558]: E0114 13:09:39.489922    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.490142 kubelet[2558]: E0114 13:09:39.490126    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.490142 kubelet[2558]: W0114 13:09:39.490139    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.490239 kubelet[2558]: E0114 13:09:39.490152    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.490384 kubelet[2558]: E0114 13:09:39.490369    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.490384 kubelet[2558]: W0114 13:09:39.490382    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.490497 kubelet[2558]: E0114 13:09:39.490398    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.490594 kubelet[2558]: E0114 13:09:39.490580    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.490594 kubelet[2558]: W0114 13:09:39.490592    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.490700 kubelet[2558]: E0114 13:09:39.490604    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.490794 kubelet[2558]: E0114 13:09:39.490778    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.490845 kubelet[2558]: W0114 13:09:39.490793    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.490845 kubelet[2558]: E0114 13:09:39.490806    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.491002 kubelet[2558]: E0114 13:09:39.490988    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.491002 kubelet[2558]: W0114 13:09:39.490999    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.491093 kubelet[2558]: E0114 13:09:39.491012    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.491221 kubelet[2558]: E0114 13:09:39.491207    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.491221 kubelet[2558]: W0114 13:09:39.491219    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.491339 kubelet[2558]: E0114 13:09:39.491231    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.491549 kubelet[2558]: E0114 13:09:39.491417    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.491549 kubelet[2558]: W0114 13:09:39.491427    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.491549 kubelet[2558]: E0114 13:09:39.491439    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.491735 kubelet[2558]: E0114 13:09:39.491610    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.491735 kubelet[2558]: W0114 13:09:39.491620    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.491735 kubelet[2558]: E0114 13:09:39.491632    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.491857 kubelet[2558]: E0114 13:09:39.491808    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.491857 kubelet[2558]: W0114 13:09:39.491819    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.491857 kubelet[2558]: E0114 13:09:39.491830    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.492018 kubelet[2558]: E0114 13:09:39.492008    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.492066 kubelet[2558]: W0114 13:09:39.492019    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.492066 kubelet[2558]: E0114 13:09:39.492031    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.492222 kubelet[2558]: E0114 13:09:39.492208    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.492222 kubelet[2558]: W0114 13:09:39.492220    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.492466 kubelet[2558]: E0114 13:09:39.492231    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.492466 kubelet[2558]: E0114 13:09:39.492424    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.492466 kubelet[2558]: W0114 13:09:39.492437    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.492466 kubelet[2558]: E0114 13:09:39.492448    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.493277 kubelet[2558]: E0114 13:09:39.492621    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.493277 kubelet[2558]: W0114 13:09:39.492631    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.493277 kubelet[2558]: E0114 13:09:39.492643    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.493277 kubelet[2558]: E0114 13:09:39.492944    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.493277 kubelet[2558]: W0114 13:09:39.492952    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.493277 kubelet[2558]: E0114 13:09:39.492962    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.493277 kubelet[2558]: E0114 13:09:39.493139    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.493277 kubelet[2558]: W0114 13:09:39.493149    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.493277 kubelet[2558]: E0114 13:09:39.493163    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.493380    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.494919 kubelet[2558]: W0114 13:09:39.493389    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.493398    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.493573    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.494919 kubelet[2558]: W0114 13:09:39.493581    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.493590    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.493823    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.494919 kubelet[2558]: W0114 13:09:39.493832    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.493841    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.494919 kubelet[2558]: E0114 13:09:39.494029    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495338 kubelet[2558]: W0114 13:09:39.494038    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495338 kubelet[2558]: E0114 13:09:39.494052    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.495338 kubelet[2558]: E0114 13:09:39.494244    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495338 kubelet[2558]: W0114 13:09:39.494252    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495338 kubelet[2558]: E0114 13:09:39.494266    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.495338 kubelet[2558]: E0114 13:09:39.494478    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495338 kubelet[2558]: W0114 13:09:39.494489    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495338 kubelet[2558]: E0114 13:09:39.494511    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.495338 kubelet[2558]: E0114 13:09:39.494696    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495338 kubelet[2558]: W0114 13:09:39.494706    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.494725    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.494902    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495714 kubelet[2558]: W0114 13:09:39.494912    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.494932    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.495135    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495714 kubelet[2558]: W0114 13:09:39.495145    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.495157    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.495653    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.495714 kubelet[2558]: W0114 13:09:39.495665    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.495714 kubelet[2558]: E0114 13:09:39.495677    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.496105 kubelet[2558]: E0114 13:09:39.495855    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.496105 kubelet[2558]: W0114 13:09:39.495865    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.496105 kubelet[2558]: E0114 13:09:39.495877    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.496105 kubelet[2558]: E0114 13:09:39.496031    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.496105 kubelet[2558]: W0114 13:09:39.496040    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.496105 kubelet[2558]: E0114 13:09:39.496050    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.496382 kubelet[2558]: E0114 13:09:39.496230    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.496382 kubelet[2558]: W0114 13:09:39.496239    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.496382 kubelet[2558]: E0114 13:09:39.496252    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.496634 kubelet[2558]: E0114 13:09:39.496618    2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 13:09:39.496634 kubelet[2558]: W0114 13:09:39.496631    2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 13:09:39.496723 kubelet[2558]: E0114 13:09:39.496644    2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 13:09:39.745469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646566654.mount: Deactivated successfully.
Jan 14 13:09:39.899761 containerd[1726]: time="2025-01-14T13:09:39.899705395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:39.904136 containerd[1726]: time="2025-01-14T13:09:39.904066042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 14 13:09:39.907507 containerd[1726]: time="2025-01-14T13:09:39.907448878Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:39.912787 containerd[1726]: time="2025-01-14T13:09:39.912728935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:39.913791 containerd[1726]: time="2025-01-14T13:09:39.913320341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.678134663s"
Jan 14 13:09:39.913791 containerd[1726]: time="2025-01-14T13:09:39.913367041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 14 13:09:39.915785 containerd[1726]: time="2025-01-14T13:09:39.915758267Z" level=info msg="CreateContainer within sandbox \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 14 13:09:39.963046 containerd[1726]: time="2025-01-14T13:09:39.962989074Z" level=info msg="CreateContainer within sandbox \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19\""
Jan 14 13:09:39.963799 containerd[1726]: time="2025-01-14T13:09:39.963600681Z" level=info msg="StartContainer for \"3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19\""
Jan 14 13:09:39.997443 systemd[1]: Started cri-containerd-3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19.scope - libcontainer container 3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19.
Jan 14 13:09:40.028755 containerd[1726]: time="2025-01-14T13:09:40.028581678Z" level=info msg="StartContainer for \"3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19\" returns successfully"
Jan 14 13:09:40.035472 systemd[1]: cri-containerd-3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19.scope: Deactivated successfully.
Jan 14 13:09:40.056060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19-rootfs.mount: Deactivated successfully.
Jan 14 13:09:40.338061 kubelet[2558]: E0114 13:09:40.337996    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:40.407497 kubelet[2558]: E0114 13:09:40.407037    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:40.953597 containerd[1726]: time="2025-01-14T13:09:40.953523103Z" level=info msg="shim disconnected" id=3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19 namespace=k8s.io
Jan 14 13:09:40.954172 containerd[1726]: time="2025-01-14T13:09:40.953655205Z" level=warning msg="cleaning up after shim disconnected" id=3301bcecc3789d11dd1564784f282ceda805ee972e9e9ad0af11c379fa305c19 namespace=k8s.io
Jan 14 13:09:40.954172 containerd[1726]: time="2025-01-14T13:09:40.953674605Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:09:41.338944 kubelet[2558]: E0114 13:09:41.338867    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:41.439471 containerd[1726]: time="2025-01-14T13:09:41.439392017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 14 13:09:42.339956 kubelet[2558]: E0114 13:09:42.339922    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:42.407096 kubelet[2558]: E0114 13:09:42.406600    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:43.341055 kubelet[2558]: E0114 13:09:43.340994    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:44.341558 kubelet[2558]: E0114 13:09:44.341507    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:44.408497 kubelet[2558]: E0114 13:09:44.407999    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:45.342251 kubelet[2558]: E0114 13:09:45.342171    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:45.543624 containerd[1726]: time="2025-01-14T13:09:45.543558059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:45.547578 containerd[1726]: time="2025-01-14T13:09:45.547502501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 14 13:09:45.551723 containerd[1726]: time="2025-01-14T13:09:45.551665046Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:45.558070 containerd[1726]: time="2025-01-14T13:09:45.558012214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:45.559104 containerd[1726]: time="2025-01-14T13:09:45.558652121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.119212002s"
Jan 14 13:09:45.559104 containerd[1726]: time="2025-01-14T13:09:45.558693021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 14 13:09:45.560998 containerd[1726]: time="2025-01-14T13:09:45.560970145Z" level=info msg="CreateContainer within sandbox \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 14 13:09:45.605669 containerd[1726]: time="2025-01-14T13:09:45.605065719Z" level=info msg="CreateContainer within sandbox \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc\""
Jan 14 13:09:45.606760 containerd[1726]: time="2025-01-14T13:09:45.606263331Z" level=info msg="StartContainer for \"10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc\""
Jan 14 13:09:45.639561 systemd[1]: Started cri-containerd-10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc.scope - libcontainer container 10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc.
Jan 14 13:09:45.672571 containerd[1726]: time="2025-01-14T13:09:45.672529043Z" level=info msg="StartContainer for \"10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc\" returns successfully"
Jan 14 13:09:46.342769 kubelet[2558]: E0114 13:09:46.342693    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:46.407806 kubelet[2558]: E0114 13:09:46.407321    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:47.037032 containerd[1726]: time="2025-01-14T13:09:47.036968084Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:09:47.039077 systemd[1]: cri-containerd-10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc.scope: Deactivated successfully.
Jan 14 13:09:47.041833 kubelet[2558]: I0114 13:09:47.041807    2558 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 14 13:09:47.065332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc-rootfs.mount: Deactivated successfully.
Jan 14 13:09:47.657490 kubelet[2558]: E0114 13:09:47.343373    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:48.344619 kubelet[2558]: E0114 13:09:48.344558    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:48.413489 systemd[1]: Created slice kubepods-besteffort-pod7f807593_f91e_4011_a174_603d407a7151.slice - libcontainer container kubepods-besteffort-pod7f807593_f91e_4011_a174_603d407a7151.slice.
Jan 14 13:09:48.416035 containerd[1726]: time="2025-01-14T13:09:48.416000697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:0,}"
Jan 14 13:09:48.768684 containerd[1726]: time="2025-01-14T13:09:48.768610399Z" level=info msg="shim disconnected" id=10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc namespace=k8s.io
Jan 14 13:09:48.768684 containerd[1726]: time="2025-01-14T13:09:48.768677700Z" level=warning msg="cleaning up after shim disconnected" id=10cd8fa6c193137e5b20ab612822a38f26965efe310cc4b70d6f3a1a1cd3dcfc namespace=k8s.io
Jan 14 13:09:48.768684 containerd[1726]: time="2025-01-14T13:09:48.768690900Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:09:48.851033 containerd[1726]: time="2025-01-14T13:09:48.850973391Z" level=error msg="Failed to destroy network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:48.853102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374-shm.mount: Deactivated successfully.
Jan 14 13:09:48.853441 containerd[1726]: time="2025-01-14T13:09:48.853398326Z" level=error msg="encountered an error cleaning up failed sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:48.854379 containerd[1726]: time="2025-01-14T13:09:48.853518428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:48.854492 kubelet[2558]: E0114 13:09:48.853854    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:48.854492 kubelet[2558]: E0114 13:09:48.853940    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:48.854492 kubelet[2558]: E0114 13:09:48.853967    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:48.854910 kubelet[2558]: E0114 13:09:48.854020    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:49.344851 kubelet[2558]: E0114 13:09:49.344784    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:49.455418 kubelet[2558]: I0114 13:09:49.455360    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374"
Jan 14 13:09:49.456157 containerd[1726]: time="2025-01-14T13:09:49.456099447Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:49.457217 containerd[1726]: time="2025-01-14T13:09:49.456410952Z" level=info msg="Ensure that sandbox 90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374 in task-service has been cleanup successfully"
Jan 14 13:09:49.457217 containerd[1726]: time="2025-01-14T13:09:49.456845858Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:49.457217 containerd[1726]: time="2025-01-14T13:09:49.456871358Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:49.461344 containerd[1726]: time="2025-01-14T13:09:49.459952503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:1,}"
Jan 14 13:09:49.460375 systemd[1]: run-netns-cni\x2dde445f68\x2d47be\x2d2207\x2d787b\x2d2114e34679c3.mount: Deactivated successfully.
Jan 14 13:09:49.463953 containerd[1726]: time="2025-01-14T13:09:49.463885760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 14 13:09:49.562555 containerd[1726]: time="2025-01-14T13:09:49.562501987Z" level=error msg="Failed to destroy network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:49.562860 containerd[1726]: time="2025-01-14T13:09:49.562823391Z" level=error msg="encountered an error cleaning up failed sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:49.562954 containerd[1726]: time="2025-01-14T13:09:49.562901193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:49.563169 kubelet[2558]: E0114 13:09:49.563136    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:49.563257 kubelet[2558]: E0114 13:09:49.563196    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:49.563257 kubelet[2558]: E0114 13:09:49.563224    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:49.563366 kubelet[2558]: E0114 13:09:49.563279    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:49.794034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad-shm.mount: Deactivated successfully.
Jan 14 13:09:50.345058 kubelet[2558]: E0114 13:09:50.345000    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:50.467064 kubelet[2558]: I0114 13:09:50.467011    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad"
Jan 14 13:09:50.468006 containerd[1726]: time="2025-01-14T13:09:50.467886888Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:50.470555 containerd[1726]: time="2025-01-14T13:09:50.468252393Z" level=info msg="Ensure that sandbox 1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad in task-service has been cleanup successfully"
Jan 14 13:09:50.470555 containerd[1726]: time="2025-01-14T13:09:50.468511397Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:50.470555 containerd[1726]: time="2025-01-14T13:09:50.468535697Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:50.471117 systemd[1]: run-netns-cni\x2d1ad9ac3b\x2d8650\x2da8c3\x2d4d8a\x2d073f7601dba1.mount: Deactivated successfully.
Jan 14 13:09:50.472764 containerd[1726]: time="2025-01-14T13:09:50.471129835Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:50.472764 containerd[1726]: time="2025-01-14T13:09:50.471243536Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:50.472764 containerd[1726]: time="2025-01-14T13:09:50.471261037Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:50.473593 containerd[1726]: time="2025-01-14T13:09:50.473201765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:2,}"
Jan 14 13:09:50.575940 containerd[1726]: time="2025-01-14T13:09:50.575884751Z" level=error msg="Failed to destroy network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:50.578316 containerd[1726]: time="2025-01-14T13:09:50.576316157Z" level=error msg="encountered an error cleaning up failed sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:50.578316 containerd[1726]: time="2025-01-14T13:09:50.576414258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:50.578250 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24-shm.mount: Deactivated successfully.
Jan 14 13:09:50.578653 kubelet[2558]: E0114 13:09:50.576759    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:50.578653 kubelet[2558]: E0114 13:09:50.576842    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:50.578653 kubelet[2558]: E0114 13:09:50.576873    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:50.578799 kubelet[2558]: E0114 13:09:50.576970    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:51.345465 kubelet[2558]: E0114 13:09:51.345411    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:51.471320 kubelet[2558]: I0114 13:09:51.470441    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24"
Jan 14 13:09:51.471491 containerd[1726]: time="2025-01-14T13:09:51.471350208Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:51.471905 containerd[1726]: time="2025-01-14T13:09:51.471662713Z" level=info msg="Ensure that sandbox 842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24 in task-service has been cleanup successfully"
Jan 14 13:09:51.471954 containerd[1726]: time="2025-01-14T13:09:51.471899316Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:51.471954 containerd[1726]: time="2025-01-14T13:09:51.471919716Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.472333622Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.472424924Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.472437524Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.472699828Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.472960731Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.472980332Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:51.476142 containerd[1726]: time="2025-01-14T13:09:51.473832844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:3,}"
Jan 14 13:09:51.475865 systemd[1]: run-netns-cni\x2d5191ea98\x2d019e\x2de27f\x2d2464\x2d68e0d182ab59.mount: Deactivated successfully.
Jan 14 13:09:51.624224 containerd[1726]: time="2025-01-14T13:09:51.623966417Z" level=error msg="Failed to destroy network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:51.625641 containerd[1726]: time="2025-01-14T13:09:51.624678027Z" level=error msg="encountered an error cleaning up failed sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:51.625641 containerd[1726]: time="2025-01-14T13:09:51.624767028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:51.625835 kubelet[2558]: E0114 13:09:51.625023    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:51.625835 kubelet[2558]: E0114 13:09:51.625098    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:51.625835 kubelet[2558]: E0114 13:09:51.625168    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:51.625972 kubelet[2558]: E0114 13:09:51.625247    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:51.628143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6-shm.mount: Deactivated successfully.
Jan 14 13:09:52.330668 kubelet[2558]: E0114 13:09:52.330585    2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:52.346128 kubelet[2558]: E0114 13:09:52.346077    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:52.473728 kubelet[2558]: I0114 13:09:52.473697    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6"
Jan 14 13:09:52.474549 containerd[1726]: time="2025-01-14T13:09:52.474509324Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:09:52.477485 containerd[1726]: time="2025-01-14T13:09:52.474755428Z" level=info msg="Ensure that sandbox cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6 in task-service has been cleanup successfully"
Jan 14 13:09:52.477485 containerd[1726]: time="2025-01-14T13:09:52.476771357Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:09:52.477485 containerd[1726]: time="2025-01-14T13:09:52.476795057Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:09:52.477485 containerd[1726]: time="2025-01-14T13:09:52.477094962Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:52.477485 containerd[1726]: time="2025-01-14T13:09:52.477175063Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:52.477485 containerd[1726]: time="2025-01-14T13:09:52.477185163Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:52.476810 systemd[1]: run-netns-cni\x2dfae427f3\x2d1e0b\x2db85d\x2d7d1e\x2de35fd68074b0.mount: Deactivated successfully.
Jan 14 13:09:52.478507 containerd[1726]: time="2025-01-14T13:09:52.477839772Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:52.478507 containerd[1726]: time="2025-01-14T13:09:52.477957474Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:52.478507 containerd[1726]: time="2025-01-14T13:09:52.477974074Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:52.479743 containerd[1726]: time="2025-01-14T13:09:52.478958388Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:52.479743 containerd[1726]: time="2025-01-14T13:09:52.479042490Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:52.479743 containerd[1726]: time="2025-01-14T13:09:52.479056290Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:52.479743 containerd[1726]: time="2025-01-14T13:09:52.479484696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:4,}"
Jan 14 13:09:53.304186 containerd[1726]: time="2025-01-14T13:09:53.304088428Z" level=error msg="Failed to destroy network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.305438 containerd[1726]: time="2025-01-14T13:09:53.304623936Z" level=error msg="encountered an error cleaning up failed sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.305438 containerd[1726]: time="2025-01-14T13:09:53.304778638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.305590 kubelet[2558]: E0114 13:09:53.305042    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.305590 kubelet[2558]: E0114 13:09:53.305097    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:53.305590 kubelet[2558]: E0114 13:09:53.305121    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:53.305692 kubelet[2558]: E0114 13:09:53.305164    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:53.309733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561-shm.mount: Deactivated successfully.
Jan 14 13:09:53.346884 kubelet[2558]: E0114 13:09:53.346838    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:53.479156 kubelet[2558]: I0114 13:09:53.479006    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561"
Jan 14 13:09:53.480150 containerd[1726]: time="2025-01-14T13:09:53.480025674Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:09:53.480686 containerd[1726]: time="2025-01-14T13:09:53.480400680Z" level=info msg="Ensure that sandbox 4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561 in task-service has been cleanup successfully"
Jan 14 13:09:53.480686 containerd[1726]: time="2025-01-14T13:09:53.480608583Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:09:53.480686 containerd[1726]: time="2025-01-14T13:09:53.480627183Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:09:53.485316 containerd[1726]: time="2025-01-14T13:09:53.480891787Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:09:53.485316 containerd[1726]: time="2025-01-14T13:09:53.480982688Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:09:53.485316 containerd[1726]: time="2025-01-14T13:09:53.480996188Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:09:53.485316 containerd[1726]: time="2025-01-14T13:09:53.481400994Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:53.485316 containerd[1726]: time="2025-01-14T13:09:53.481480295Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:53.485316 containerd[1726]: time="2025-01-14T13:09:53.481493195Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:53.486624 containerd[1726]: time="2025-01-14T13:09:53.486091062Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:53.486624 containerd[1726]: time="2025-01-14T13:09:53.486593869Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:53.486624 containerd[1726]: time="2025-01-14T13:09:53.486607869Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:53.487222 systemd[1]: run-netns-cni\x2d7f6c317b\x2d42c2\x2db43a\x2dd178\x2d3f3f947718c8.mount: Deactivated successfully.
Jan 14 13:09:53.490240 containerd[1726]: time="2025-01-14T13:09:53.490124320Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:53.490240 containerd[1726]: time="2025-01-14T13:09:53.490229422Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:53.490400 containerd[1726]: time="2025-01-14T13:09:53.490242622Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:53.490779 containerd[1726]: time="2025-01-14T13:09:53.490746929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:5,}"
Jan 14 13:09:53.638268 containerd[1726]: time="2025-01-14T13:09:53.638051361Z" level=error msg="Failed to destroy network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.641284 containerd[1726]: time="2025-01-14T13:09:53.641072705Z" level=error msg="encountered an error cleaning up failed sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.641284 containerd[1726]: time="2025-01-14T13:09:53.641170106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.643172 kubelet[2558]: E0114 13:09:53.641474    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:53.643172 kubelet[2558]: E0114 13:09:53.641538    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:53.643172 kubelet[2558]: E0114 13:09:53.641564    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:53.642048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb-shm.mount: Deactivated successfully.
Jan 14 13:09:53.644049 kubelet[2558]: E0114 13:09:53.641612    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:54.347409 kubelet[2558]: E0114 13:09:54.347358    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:54.390319 kubelet[2558]: I0114 13:09:54.389141    2558 topology_manager.go:215] "Topology Admit Handler" podUID="60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776" podNamespace="default" podName="nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:54.398532 systemd[1]: Created slice kubepods-besteffort-pod60ed0fb3_d0a4_4860_9f5b_3c5bcf47b776.slice - libcontainer container kubepods-besteffort-pod60ed0fb3_d0a4_4860_9f5b_3c5bcf47b776.slice.
Jan 14 13:09:54.486506 kubelet[2558]: I0114 13:09:54.486468    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb"
Jan 14 13:09:54.487125 kubelet[2558]: I0114 13:09:54.487097    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7x4\" (UniqueName: \"kubernetes.io/projected/60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776-kube-api-access-sj7x4\") pod \"nginx-deployment-85f456d6dd-kjn5z\" (UID: \"60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776\") " pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:54.487394 containerd[1726]: time="2025-01-14T13:09:54.487359750Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:09:54.488440 containerd[1726]: time="2025-01-14T13:09:54.488142662Z" level=info msg="Ensure that sandbox 77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb in task-service has been cleanup successfully"
Jan 14 13:09:54.488440 containerd[1726]: time="2025-01-14T13:09:54.488346365Z" level=info msg="TearDown network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" successfully"
Jan 14 13:09:54.488440 containerd[1726]: time="2025-01-14T13:09:54.488366265Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" returns successfully"
Jan 14 13:09:54.488844 containerd[1726]: time="2025-01-14T13:09:54.488695670Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:09:54.488844 containerd[1726]: time="2025-01-14T13:09:54.488780071Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:09:54.488844 containerd[1726]: time="2025-01-14T13:09:54.488793571Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:09:54.492928 containerd[1726]: time="2025-01-14T13:09:54.492900531Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:09:54.493148 containerd[1726]: time="2025-01-14T13:09:54.492996532Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:09:54.493148 containerd[1726]: time="2025-01-14T13:09:54.493011932Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:09:54.493881 containerd[1726]: time="2025-01-14T13:09:54.493812344Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:54.494190 containerd[1726]: time="2025-01-14T13:09:54.494072148Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:54.494190 containerd[1726]: time="2025-01-14T13:09:54.494091648Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:54.494641 systemd[1]: run-netns-cni\x2d53c7dec6\x2ddeeb\x2d4a35\x2d0e7d\x2d7551d530f643.mount: Deactivated successfully.
Jan 14 13:09:54.496904 containerd[1726]: time="2025-01-14T13:09:54.496699786Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:54.496904 containerd[1726]: time="2025-01-14T13:09:54.496798887Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:54.496904 containerd[1726]: time="2025-01-14T13:09:54.496851888Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:54.497759 containerd[1726]: time="2025-01-14T13:09:54.497731001Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:54.498053 containerd[1726]: time="2025-01-14T13:09:54.497834102Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:54.498053 containerd[1726]: time="2025-01-14T13:09:54.497872603Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:54.498397 containerd[1726]: time="2025-01-14T13:09:54.498373210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:6,}"
Jan 14 13:09:54.670789 containerd[1726]: time="2025-01-14T13:09:54.670660603Z" level=error msg="Failed to destroy network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.672778 containerd[1726]: time="2025-01-14T13:09:54.672568330Z" level=error msg="encountered an error cleaning up failed sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.672778 containerd[1726]: time="2025-01-14T13:09:54.672661532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.674485 kubelet[2558]: E0114 13:09:54.672943    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.674485 kubelet[2558]: E0114 13:09:54.674426    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:54.674740 kubelet[2558]: E0114 13:09:54.674514    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:54.676117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613-shm.mount: Deactivated successfully.
Jan 14 13:09:54.676582 kubelet[2558]: E0114 13:09:54.676217    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:54.703468 containerd[1726]: time="2025-01-14T13:09:54.703418007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:0,}"
Jan 14 13:09:54.861060 containerd[1726]: time="2025-01-14T13:09:54.860917585Z" level=error msg="Failed to destroy network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.861664 containerd[1726]: time="2025-01-14T13:09:54.861459792Z" level=error msg="encountered an error cleaning up failed sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.861664 containerd[1726]: time="2025-01-14T13:09:54.861548593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.862319 kubelet[2558]: E0114 13:09:54.861999    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:54.862319 kubelet[2558]: E0114 13:09:54.862079    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:54.862319 kubelet[2558]: E0114 13:09:54.862108    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:54.862561 kubelet[2558]: E0114 13:09:54.862162    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kjn5z_default(60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kjn5z_default(60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kjn5z" podUID="60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776"
Jan 14 13:09:55.348012 kubelet[2558]: E0114 13:09:55.347960    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:55.496327 kubelet[2558]: I0114 13:09:55.495153    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613"
Jan 14 13:09:55.496452 containerd[1726]: time="2025-01-14T13:09:55.496030095Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\""
Jan 14 13:09:55.496452 containerd[1726]: time="2025-01-14T13:09:55.496258497Z" level=info msg="Ensure that sandbox ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613 in task-service has been cleanup successfully"
Jan 14 13:09:55.497443 containerd[1726]: time="2025-01-14T13:09:55.497186305Z" level=info msg="TearDown network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" successfully"
Jan 14 13:09:55.497443 containerd[1726]: time="2025-01-14T13:09:55.497208305Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" returns successfully"
Jan 14 13:09:55.500388 containerd[1726]: time="2025-01-14T13:09:55.499517926Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:09:55.500388 containerd[1726]: time="2025-01-14T13:09:55.499612527Z" level=info msg="TearDown network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" successfully"
Jan 14 13:09:55.500388 containerd[1726]: time="2025-01-14T13:09:55.499628627Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" returns successfully"
Jan 14 13:09:55.499964 systemd[1]: run-netns-cni\x2de2d2516a\x2d5645\x2d35b7\x2d1c7e\x2d6c58b0a3e837.mount: Deactivated successfully.
Jan 14 13:09:55.502882 containerd[1726]: time="2025-01-14T13:09:55.502400952Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:09:55.502882 containerd[1726]: time="2025-01-14T13:09:55.502490853Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:09:55.502882 containerd[1726]: time="2025-01-14T13:09:55.502506353Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:09:55.503038 kubelet[2558]: I0114 13:09:55.502675    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503334661Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503422462Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503436962Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503456462Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\""
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503646164Z" level=info msg="Ensure that sandbox c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5 in task-service has been cleanup successfully"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503661464Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503733965Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503747165Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.503987167Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.504074368Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:55.504318 containerd[1726]: time="2025-01-14T13:09:55.504090068Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:55.506327 containerd[1726]: time="2025-01-14T13:09:55.506280988Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:55.506413 containerd[1726]: time="2025-01-14T13:09:55.506387789Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:55.506413 containerd[1726]: time="2025-01-14T13:09:55.506402689Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:55.507742 containerd[1726]: time="2025-01-14T13:09:55.507238396Z" level=info msg="TearDown network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" successfully"
Jan 14 13:09:55.507742 containerd[1726]: time="2025-01-14T13:09:55.507261797Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" returns successfully"
Jan 14 13:09:55.507411 systemd[1]: run-netns-cni\x2dbe37e48b\x2d4aed\x2d01f4\x2d6b38\x2dee5abde76f41.mount: Deactivated successfully.
Jan 14 13:09:55.507934 containerd[1726]: time="2025-01-14T13:09:55.507786301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:1,}"
Jan 14 13:09:55.514172 containerd[1726]: time="2025-01-14T13:09:55.513927757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:7,}"
Jan 14 13:09:55.717286 containerd[1726]: time="2025-01-14T13:09:55.717058700Z" level=error msg="Failed to destroy network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.719584 containerd[1726]: time="2025-01-14T13:09:55.719377821Z" level=error msg="encountered an error cleaning up failed sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.719584 containerd[1726]: time="2025-01-14T13:09:55.719466322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.719782 kubelet[2558]: E0114 13:09:55.719723    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.719846 kubelet[2558]: E0114 13:09:55.719798    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:55.719846 kubelet[2558]: E0114 13:09:55.719829    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:55.719945 kubelet[2558]: E0114 13:09:55.719879    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:55.734465 containerd[1726]: time="2025-01-14T13:09:55.734408357Z" level=error msg="Failed to destroy network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.735146 containerd[1726]: time="2025-01-14T13:09:55.734911762Z" level=error msg="encountered an error cleaning up failed sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.735146 containerd[1726]: time="2025-01-14T13:09:55.734994363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.735407 kubelet[2558]: E0114 13:09:55.735246    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:55.735407 kubelet[2558]: E0114 13:09:55.735323    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:55.735407 kubelet[2558]: E0114 13:09:55.735351    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:55.735586 kubelet[2558]: E0114 13:09:55.735408    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kjn5z_default(60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kjn5z_default(60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kjn5z" podUID="60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776"
Jan 14 13:09:56.348604 kubelet[2558]: E0114 13:09:56.348178    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:56.495743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686-shm.mount: Deactivated successfully.
Jan 14 13:09:56.511240 kubelet[2558]: I0114 13:09:56.511117    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686"
Jan 14 13:09:56.513061 containerd[1726]: time="2025-01-14T13:09:56.512593217Z" level=info msg="StopPodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\""
Jan 14 13:09:56.513061 containerd[1726]: time="2025-01-14T13:09:56.512864319Z" level=info msg="Ensure that sandbox bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686 in task-service has been cleanup successfully"
Jan 14 13:09:56.513061 containerd[1726]: time="2025-01-14T13:09:56.513050721Z" level=info msg="TearDown network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" successfully"
Jan 14 13:09:56.513541 containerd[1726]: time="2025-01-14T13:09:56.513068821Z" level=info msg="StopPodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" returns successfully"
Jan 14 13:09:56.515853 systemd[1]: run-netns-cni\x2d909f3fbc\x2d2fff\x2d4eaf\x2dd686\x2dc7066337aa06.mount: Deactivated successfully.
Jan 14 13:09:56.520125 containerd[1726]: time="2025-01-14T13:09:56.520085685Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\""
Jan 14 13:09:56.520282 containerd[1726]: time="2025-01-14T13:09:56.520221986Z" level=info msg="TearDown network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" successfully"
Jan 14 13:09:56.520365 containerd[1726]: time="2025-01-14T13:09:56.520279586Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" returns successfully"
Jan 14 13:09:56.520995 containerd[1726]: time="2025-01-14T13:09:56.520962693Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:09:56.521071 containerd[1726]: time="2025-01-14T13:09:56.521057194Z" level=info msg="TearDown network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" successfully"
Jan 14 13:09:56.521114 containerd[1726]: time="2025-01-14T13:09:56.521073194Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" returns successfully"
Jan 14 13:09:56.522321 containerd[1726]: time="2025-01-14T13:09:56.522277205Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:09:56.525582 containerd[1726]: time="2025-01-14T13:09:56.522403006Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:09:56.525582 containerd[1726]: time="2025-01-14T13:09:56.522424006Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:09:56.525736 kubelet[2558]: I0114 13:09:56.524810    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2"
Jan 14 13:09:56.525795 containerd[1726]: time="2025-01-14T13:09:56.525760136Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:09:56.525880 containerd[1726]: time="2025-01-14T13:09:56.525853937Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:09:56.525941 containerd[1726]: time="2025-01-14T13:09:56.525879137Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:09:56.525994 containerd[1726]: time="2025-01-14T13:09:56.525975038Z" level=info msg="StopPodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\""
Jan 14 13:09:56.526374 containerd[1726]: time="2025-01-14T13:09:56.526349042Z" level=info msg="Ensure that sandbox 868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2 in task-service has been cleanup successfully"
Jan 14 13:09:56.530406 systemd[1]: run-netns-cni\x2de2b3bceb\x2d5c24\x2d9b4f\x2d4c28\x2d72f2b5881ab4.mount: Deactivated successfully.
Jan 14 13:09:56.531348 containerd[1726]: time="2025-01-14T13:09:56.531269686Z" level=info msg="TearDown network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" successfully"
Jan 14 13:09:56.531348 containerd[1726]: time="2025-01-14T13:09:56.531337687Z" level=info msg="StopPodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" returns successfully"
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.531532389Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.531631189Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.531644890Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.531775391Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\""
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.531850791Z" level=info msg="TearDown network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" successfully"
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.531862892Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" returns successfully"
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.532212195Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.532321596Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:56.532380 containerd[1726]: time="2025-01-14T13:09:56.532337096Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:56.532739 containerd[1726]: time="2025-01-14T13:09:56.532533498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:2,}"
Jan 14 13:09:56.533660 containerd[1726]: time="2025-01-14T13:09:56.533633108Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:56.533759 containerd[1726]: time="2025-01-14T13:09:56.533737909Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:56.533808 containerd[1726]: time="2025-01-14T13:09:56.533761009Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:56.534263 containerd[1726]: time="2025-01-14T13:09:56.534236313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:8,}"
Jan 14 13:09:56.591038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587200237.mount: Deactivated successfully.
Jan 14 13:09:56.695043 containerd[1726]: time="2025-01-14T13:09:56.693783960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:56.697839 containerd[1726]: time="2025-01-14T13:09:56.697777197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 14 13:09:56.701941 containerd[1726]: time="2025-01-14T13:09:56.701896534Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:56.711603 containerd[1726]: time="2025-01-14T13:09:56.711555622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:56.714407 containerd[1726]: time="2025-01-14T13:09:56.714359847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.250437287s"
Jan 14 13:09:56.714522 containerd[1726]: time="2025-01-14T13:09:56.714411448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 14 13:09:56.727832 containerd[1726]: time="2025-01-14T13:09:56.727782669Z" level=info msg="CreateContainer within sandbox \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 14 13:09:56.744442 containerd[1726]: time="2025-01-14T13:09:56.744386620Z" level=error msg="Failed to destroy network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.744984 containerd[1726]: time="2025-01-14T13:09:56.744944225Z" level=error msg="encountered an error cleaning up failed sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.745206 containerd[1726]: time="2025-01-14T13:09:56.745178927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.745740 kubelet[2558]: E0114 13:09:56.745693    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.745823 kubelet[2558]: E0114 13:09:56.745767    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:56.745823 kubelet[2558]: E0114 13:09:56.745795    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-kjn5z"
Jan 14 13:09:56.745924 kubelet[2558]: E0114 13:09:56.745848    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-kjn5z_default(60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-kjn5z_default(60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-kjn5z" podUID="60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776"
Jan 14 13:09:56.759697 containerd[1726]: time="2025-01-14T13:09:56.759648258Z" level=error msg="Failed to destroy network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.760871 containerd[1726]: time="2025-01-14T13:09:56.760837769Z" level=error msg="encountered an error cleaning up failed sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.760969 containerd[1726]: time="2025-01-14T13:09:56.760919770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.761185 kubelet[2558]: E0114 13:09:56.761152    2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:09:56.761299 kubelet[2558]: E0114 13:09:56.761209    2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:56.761299 kubelet[2558]: E0114 13:09:56.761234    2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjcc6"
Jan 14 13:09:56.761578 kubelet[2558]: E0114 13:09:56.761532    2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjcc6_calico-system(7f807593-f91e-4011-a174-603d407a7151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjcc6" podUID="7f807593-f91e-4011-a174-603d407a7151"
Jan 14 13:09:56.790741 containerd[1726]: time="2025-01-14T13:09:56.790686040Z" level=info msg="CreateContainer within sandbox \"166ca973e69328ed91648663e33e0757b65418652fe7f1c7ee60b21cb15eb62b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a\""
Jan 14 13:09:56.791364 containerd[1726]: time="2025-01-14T13:09:56.791316045Z" level=info msg="StartContainer for \"38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a\""
Jan 14 13:09:56.819736 systemd[1]: Started cri-containerd-38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a.scope - libcontainer container 38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a.
Jan 14 13:09:56.854098 containerd[1726]: time="2025-01-14T13:09:56.854049214Z" level=info msg="StartContainer for \"38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a\" returns successfully"
Jan 14 13:09:57.029198 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 14 13:09:57.029373 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 14 13:09:57.348768 kubelet[2558]: E0114 13:09:57.348612    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:57.497095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7-shm.mount: Deactivated successfully.
Jan 14 13:09:57.538071 kubelet[2558]: I0114 13:09:57.537338    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15"
Jan 14 13:09:57.538239 containerd[1726]: time="2025-01-14T13:09:57.538200821Z" level=info msg="StopPodSandbox for \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\""
Jan 14 13:09:57.538645 containerd[1726]: time="2025-01-14T13:09:57.538482223Z" level=info msg="Ensure that sandbox 8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15 in task-service has been cleanup successfully"
Jan 14 13:09:57.538827 containerd[1726]: time="2025-01-14T13:09:57.538736126Z" level=info msg="TearDown network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\" successfully"
Jan 14 13:09:57.538906 containerd[1726]: time="2025-01-14T13:09:57.538825627Z" level=info msg="StopPodSandbox for \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\" returns successfully"
Jan 14 13:09:57.541310 containerd[1726]: time="2025-01-14T13:09:57.539152530Z" level=info msg="StopPodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\""
Jan 14 13:09:57.541310 containerd[1726]: time="2025-01-14T13:09:57.539245230Z" level=info msg="TearDown network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" successfully"
Jan 14 13:09:57.541310 containerd[1726]: time="2025-01-14T13:09:57.539258631Z" level=info msg="StopPodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" returns successfully"
Jan 14 13:09:57.542342 containerd[1726]: time="2025-01-14T13:09:57.541514651Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\""
Jan 14 13:09:57.542342 containerd[1726]: time="2025-01-14T13:09:57.542157657Z" level=info msg="TearDown network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" successfully"
Jan 14 13:09:57.542342 containerd[1726]: time="2025-01-14T13:09:57.542175957Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" returns successfully"
Jan 14 13:09:57.542829 systemd[1]: run-netns-cni\x2dc2b49b63\x2d664c\x2d39b4\x2d4ada\x2d02b0415c7c72.mount: Deactivated successfully.
Jan 14 13:09:57.547003 kubelet[2558]: I0114 13:09:57.546251    2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7"
Jan 14 13:09:57.547312 containerd[1726]: time="2025-01-14T13:09:57.547268003Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:09:57.547730 containerd[1726]: time="2025-01-14T13:09:57.547666007Z" level=info msg="TearDown network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" successfully"
Jan 14 13:09:57.547730 containerd[1726]: time="2025-01-14T13:09:57.547687207Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" returns successfully"
Jan 14 13:09:57.547989 containerd[1726]: time="2025-01-14T13:09:57.547453405Z" level=info msg="StopPodSandbox for \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\""
Jan 14 13:09:57.548590 containerd[1726]: time="2025-01-14T13:09:57.548448814Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:09:57.548590 containerd[1726]: time="2025-01-14T13:09:57.548534415Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:09:57.548590 containerd[1726]: time="2025-01-14T13:09:57.548548315Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:09:57.549347 containerd[1726]: time="2025-01-14T13:09:57.549322122Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:09:57.549640 containerd[1726]: time="2025-01-14T13:09:57.549619224Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:09:57.549762 containerd[1726]: time="2025-01-14T13:09:57.549740526Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:09:57.549970 containerd[1726]: time="2025-01-14T13:09:57.549672925Z" level=info msg="Ensure that sandbox 16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7 in task-service has been cleanup successfully"
Jan 14 13:09:57.553416 containerd[1726]: time="2025-01-14T13:09:57.552956555Z" level=info msg="TearDown network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\" successfully"
Jan 14 13:09:57.553416 containerd[1726]: time="2025-01-14T13:09:57.552983155Z" level=info msg="StopPodSandbox for \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\" returns successfully"
Jan 14 13:09:57.553416 containerd[1726]: time="2025-01-14T13:09:57.553011755Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:09:57.553416 containerd[1726]: time="2025-01-14T13:09:57.553114256Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:09:57.553416 containerd[1726]: time="2025-01-14T13:09:57.553127056Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:09:57.554173 systemd[1]: run-netns-cni\x2d034585e8\x2de1c8\x2d5441\x2dc3ea\x2d698231bcfea4.mount: Deactivated successfully.
Jan 14 13:09:57.557344 containerd[1726]: time="2025-01-14T13:09:57.554438268Z" level=info msg="StopPodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\""
Jan 14 13:09:57.557344 containerd[1726]: time="2025-01-14T13:09:57.554553569Z" level=info msg="TearDown network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" successfully"
Jan 14 13:09:57.557344 containerd[1726]: time="2025-01-14T13:09:57.554568069Z" level=info msg="StopPodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" returns successfully"
Jan 14 13:09:57.566087 systemd[1]: run-containerd-runc-k8s.io-38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a-runc.NmS08X.mount: Deactivated successfully.
Jan 14 13:09:57.568020 containerd[1726]: time="2025-01-14T13:09:57.567919791Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:09:57.568127 containerd[1726]: time="2025-01-14T13:09:57.567982691Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\""
Jan 14 13:09:57.568482 containerd[1726]: time="2025-01-14T13:09:57.568361495Z" level=info msg="TearDown network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" successfully"
Jan 14 13:09:57.568549 containerd[1726]: time="2025-01-14T13:09:57.568481796Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" returns successfully"
Jan 14 13:09:57.568969 containerd[1726]: time="2025-01-14T13:09:57.568847699Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:09:57.569039 containerd[1726]: time="2025-01-14T13:09:57.568968300Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:09:57.570700 containerd[1726]: time="2025-01-14T13:09:57.570676716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:3,}"
Jan 14 13:09:57.572832 containerd[1726]: time="2025-01-14T13:09:57.572381331Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:09:57.572832 containerd[1726]: time="2025-01-14T13:09:57.572619333Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:09:57.572832 containerd[1726]: time="2025-01-14T13:09:57.572632633Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:09:57.573149 containerd[1726]: time="2025-01-14T13:09:57.573118938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:9,}"
Jan 14 13:09:57.801562 systemd-networkd[1336]: cali1ce94b3f30f: Link UP
Jan 14 13:09:57.804722 systemd-networkd[1336]: cali1ce94b3f30f: Gained carrier
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.684 [INFO][3527] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.699 [INFO][3527] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-csi--node--driver--cjcc6-eth0 csi-node-driver- calico-system  7f807593-f91e-4011-a174-603d407a7151 1218 0 2025-01-14 13:09:32 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  10.200.8.19  csi-node-driver-cjcc6 eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali1ce94b3f30f  [] []}} ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.699 [INFO][3527] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.744 [INFO][3550] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" HandleID="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Workload="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.756 [INFO][3550] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" HandleID="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Workload="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040c130), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.8.19", "pod":"csi-node-driver-cjcc6", "timestamp":"2025-01-14 13:09:57.74427109 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.756 [INFO][3550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.756 [INFO][3550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.756 [INFO][3550] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19'
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.757 [INFO][3550] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.760 [INFO][3550] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.764 [INFO][3550] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.765 [INFO][3550] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.767 [INFO][3550] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.767 [INFO][3550] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.768 [INFO][3550] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.772 [INFO][3550] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.780 [INFO][3550] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.129/26] block=192.168.41.128/26 handle="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.780 [INFO][3550] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.129/26] handle="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" host="10.200.8.19"
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.780 [INFO][3550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 14 13:09:57.818457 containerd[1726]: 2025-01-14 13:09:57.780 [INFO][3550] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.129/26] IPv6=[] ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" HandleID="k8s-pod-network.983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Workload="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.819779 containerd[1726]: 2025-01-14 13:09:57.783 [INFO][3527] cni-plugin/k8s.go 386: Populated endpoint ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-csi--node--driver--cjcc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f807593-f91e-4011-a174-603d407a7151", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 9, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"csi-node-driver-cjcc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ce94b3f30f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:09:57.819779 containerd[1726]: 2025-01-14 13:09:57.783 [INFO][3527] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.129/32] ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.819779 containerd[1726]: 2025-01-14 13:09:57.783 [INFO][3527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ce94b3f30f ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.819779 containerd[1726]: 2025-01-14 13:09:57.804 [INFO][3527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.819779 containerd[1726]: 2025-01-14 13:09:57.805 [INFO][3527] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-csi--node--driver--cjcc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f807593-f91e-4011-a174-603d407a7151", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 9, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae", Pod:"csi-node-driver-cjcc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ce94b3f30f", MAC:"66:c7:65:6e:91:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:09:57.819779 containerd[1726]: 2025-01-14 13:09:57.815 [INFO][3527] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae" Namespace="calico-system" Pod="csi-node-driver-cjcc6" WorkloadEndpoint="10.200.8.19-k8s-csi--node--driver--cjcc6-eth0"
Jan 14 13:09:57.820146 kubelet[2558]: I0114 13:09:57.818638    2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6bgmt" podStartSLOduration=5.7687171809999995 podStartE2EDuration="25.818612765s" podCreationTimestamp="2025-01-14 13:09:32 +0000 UTC" firstStartedPulling="2025-01-14 13:09:36.665860776 +0000 UTC m=+5.423796193" lastFinishedPulling="2025-01-14 13:09:56.71575626 +0000 UTC m=+25.473691777" observedRunningTime="2025-01-14 13:09:57.571521623 +0000 UTC m=+26.329457040" watchObservedRunningTime="2025-01-14 13:09:57.818612765 +0000 UTC m=+26.576548282"
Jan 14 13:09:57.821427 systemd-networkd[1336]: cali77bace387e2: Link UP
Jan 14 13:09:57.822407 systemd-networkd[1336]: cali77bace387e2: Gained carrier
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.697 [INFO][3535] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.709 [INFO][3535] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0 nginx-deployment-85f456d6dd- default  60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776 1314 0 2025-01-14 13:09:54 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  10.200.8.19  nginx-deployment-85f456d6dd-kjn5z eth0 default [] []   [kns.default ksa.default.default] cali77bace387e2  [] []}} ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.709 [INFO][3535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.748 [INFO][3554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" HandleID="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Workload="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.756 [INFO][3554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" HandleID="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Workload="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319ed0), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.19", "pod":"nginx-deployment-85f456d6dd-kjn5z", "timestamp":"2025-01-14 13:09:57.748000624 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.756 [INFO][3554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.780 [INFO][3554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.780 [INFO][3554] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19'
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.782 [INFO][3554] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.787 [INFO][3554] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.791 [INFO][3554] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.793 [INFO][3554] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.794 [INFO][3554] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.794 [INFO][3554] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.796 [INFO][3554] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.803 [INFO][3554] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.812 [INFO][3554] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.130/26] block=192.168.41.128/26 handle="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.812 [INFO][3554] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.130/26] handle="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" host="10.200.8.19"
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.812 [INFO][3554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 14 13:09:57.832772 containerd[1726]: 2025-01-14 13:09:57.812 [INFO][3554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.130/26] IPv6=[] ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" HandleID="k8s-pod-network.107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Workload="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.833710 containerd[1726]: 2025-01-14 13:09:57.815 [INFO][3535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776", ResourceVersion:"1314", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 9, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-kjn5z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali77bace387e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:09:57.833710 containerd[1726]: 2025-01-14 13:09:57.815 [INFO][3535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.130/32] ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.833710 containerd[1726]: 2025-01-14 13:09:57.815 [INFO][3535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77bace387e2 ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.833710 containerd[1726]: 2025-01-14 13:09:57.824 [INFO][3535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.833710 containerd[1726]: 2025-01-14 13:09:57.824 [INFO][3535] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776", ResourceVersion:"1314", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 9, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0", Pod:"nginx-deployment-85f456d6dd-kjn5z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali77bace387e2", MAC:"6a:76:c7:2b:ff:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:09:57.833710 containerd[1726]: 2025-01-14 13:09:57.831 [INFO][3535] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0" Namespace="default" Pod="nginx-deployment-85f456d6dd-kjn5z" WorkloadEndpoint="10.200.8.19-k8s-nginx--deployment--85f456d6dd--kjn5z-eth0"
Jan 14 13:09:57.856699 containerd[1726]: time="2025-01-14T13:09:57.856415208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:09:57.856699 containerd[1726]: time="2025-01-14T13:09:57.856478008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:09:57.856699 containerd[1726]: time="2025-01-14T13:09:57.856499708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:57.856699 containerd[1726]: time="2025-01-14T13:09:57.856588409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:57.878515 systemd[1]: Started cri-containerd-983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae.scope - libcontainer container 983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae.
Jan 14 13:09:57.886258 containerd[1726]: time="2025-01-14T13:09:57.885959976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:09:57.886258 containerd[1726]: time="2025-01-14T13:09:57.886023376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:09:57.886258 containerd[1726]: time="2025-01-14T13:09:57.886045176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:57.886258 containerd[1726]: time="2025-01-14T13:09:57.886137777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:09:57.914667 systemd[1]: Started cri-containerd-107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0.scope - libcontainer container 107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0.
Jan 14 13:09:57.918493 containerd[1726]: time="2025-01-14T13:09:57.918450670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjcc6,Uid:7f807593-f91e-4011-a174-603d407a7151,Namespace:calico-system,Attempt:9,} returns sandbox id \"983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae\""
Jan 14 13:09:57.921832 containerd[1726]: time="2025-01-14T13:09:57.921794701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 14 13:09:57.957149 containerd[1726]: time="2025-01-14T13:09:57.957095221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kjn5z,Uid:60ed0fb3-d0a4-4860-9f5b-3c5bcf47b776,Namespace:default,Attempt:3,} returns sandbox id \"107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0\""
Jan 14 13:09:58.348841 kubelet[2558]: E0114 13:09:58.348774    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:58.578390 kernel: bpftool[3790]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 14 13:09:58.586556 systemd[1]: run-containerd-runc-k8s.io-38785d95ec11cbbda4d283a824778ce7e5ef9cc91337d1109d3da38d0e846f3a-runc.bBJp7m.mount: Deactivated successfully.
Jan 14 13:09:58.890759 systemd-networkd[1336]: vxlan.calico: Link UP
Jan 14 13:09:58.890768 systemd-networkd[1336]: vxlan.calico: Gained carrier
Jan 14 13:09:59.001542 systemd-networkd[1336]: cali77bace387e2: Gained IPv6LL
Jan 14 13:09:59.350164 kubelet[2558]: E0114 13:09:59.349894    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:09:59.430945 containerd[1726]: time="2025-01-14T13:09:59.430886291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:59.441338 containerd[1726]: time="2025-01-14T13:09:59.441258785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 14 13:09:59.448120 containerd[1726]: time="2025-01-14T13:09:59.448043847Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:59.455021 containerd[1726]: time="2025-01-14T13:09:59.454947309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:09:59.456050 containerd[1726]: time="2025-01-14T13:09:59.455570615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.533707314s"
Jan 14 13:09:59.456050 containerd[1726]: time="2025-01-14T13:09:59.455611815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 14 13:09:59.456597 containerd[1726]: time="2025-01-14T13:09:59.456567424Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 14 13:09:59.457861 containerd[1726]: time="2025-01-14T13:09:59.457832435Z" level=info msg="CreateContainer within sandbox \"983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 14 13:09:59.516979 containerd[1726]: time="2025-01-14T13:09:59.516922072Z" level=info msg="CreateContainer within sandbox \"983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"09c862832ff11a4cba3c97dac2d84f760742e6bcc4910d3b3d9935b66b6f7680\""
Jan 14 13:09:59.517591 containerd[1726]: time="2025-01-14T13:09:59.517557677Z" level=info msg="StartContainer for \"09c862832ff11a4cba3c97dac2d84f760742e6bcc4910d3b3d9935b66b6f7680\""
Jan 14 13:09:59.553476 systemd[1]: Started cri-containerd-09c862832ff11a4cba3c97dac2d84f760742e6bcc4910d3b3d9935b66b6f7680.scope - libcontainer container 09c862832ff11a4cba3c97dac2d84f760742e6bcc4910d3b3d9935b66b6f7680.
Jan 14 13:09:59.588798 containerd[1726]: time="2025-01-14T13:09:59.588754623Z" level=info msg="StartContainer for \"09c862832ff11a4cba3c97dac2d84f760742e6bcc4910d3b3d9935b66b6f7680\" returns successfully"
Jan 14 13:09:59.705566 systemd-networkd[1336]: cali1ce94b3f30f: Gained IPv6LL
Jan 14 13:10:00.025485 systemd-networkd[1336]: vxlan.calico: Gained IPv6LL
Jan 14 13:10:00.350249 kubelet[2558]: E0114 13:10:00.350083    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:01.350923 kubelet[2558]: E0114 13:10:01.350856    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:02.352123 kubelet[2558]: E0114 13:10:02.351867    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:02.982736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489572018.mount: Deactivated successfully.
Jan 14 13:10:03.352970 kubelet[2558]: E0114 13:10:03.352909    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:04.353943 kubelet[2558]: E0114 13:10:04.353863    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:04.375111 containerd[1726]: time="2025-01-14T13:10:04.375054601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:04.379015 containerd[1726]: time="2025-01-14T13:10:04.378938335Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 14 13:10:04.384386 containerd[1726]: time="2025-01-14T13:10:04.384318682Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:04.391928 containerd[1726]: time="2025-01-14T13:10:04.391859648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:04.392850 containerd[1726]: time="2025-01-14T13:10:04.392648554Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.93604573s"
Jan 14 13:10:04.392850 containerd[1726]: time="2025-01-14T13:10:04.392692155Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 14 13:10:04.394575 containerd[1726]: time="2025-01-14T13:10:04.394103267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 14 13:10:04.395808 containerd[1726]: time="2025-01-14T13:10:04.395598280Z" level=info msg="CreateContainer within sandbox \"107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 14 13:10:04.472542 containerd[1726]: time="2025-01-14T13:10:04.472484451Z" level=info msg="CreateContainer within sandbox \"107aa4cf1fcb1ad4cdd581e75fb140431114e62b63b5d05fe773c7afe77df9d0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"eff45d5698d9adb6f8345e6f4864570009ad605844c40fedd6502881afe52411\""
Jan 14 13:10:04.473246 containerd[1726]: time="2025-01-14T13:10:04.473125157Z" level=info msg="StartContainer for \"eff45d5698d9adb6f8345e6f4864570009ad605844c40fedd6502881afe52411\""
Jan 14 13:10:04.505477 systemd[1]: Started cri-containerd-eff45d5698d9adb6f8345e6f4864570009ad605844c40fedd6502881afe52411.scope - libcontainer container eff45d5698d9adb6f8345e6f4864570009ad605844c40fedd6502881afe52411.
Jan 14 13:10:04.540104 containerd[1726]: time="2025-01-14T13:10:04.540053941Z" level=info msg="StartContainer for \"eff45d5698d9adb6f8345e6f4864570009ad605844c40fedd6502881afe52411\" returns successfully"
Jan 14 13:10:04.600688 kubelet[2558]: I0114 13:10:04.600614    2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-kjn5z" podStartSLOduration=4.165088536 podStartE2EDuration="10.600594169s" podCreationTimestamp="2025-01-14 13:09:54 +0000 UTC" firstStartedPulling="2025-01-14 13:09:57.958427933 +0000 UTC m=+26.716363350" lastFinishedPulling="2025-01-14 13:10:04.393933566 +0000 UTC m=+33.151868983" observedRunningTime="2025-01-14 13:10:04.600480168 +0000 UTC m=+33.358415585" watchObservedRunningTime="2025-01-14 13:10:04.600594169 +0000 UTC m=+33.358529586"
Jan 14 13:10:05.354919 kubelet[2558]: E0114 13:10:05.354855    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:06.355179 kubelet[2558]: E0114 13:10:06.355132    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:06.517912 containerd[1726]: time="2025-01-14T13:10:06.517854600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:06.523393 containerd[1726]: time="2025-01-14T13:10:06.523325248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 14 13:10:06.537073 containerd[1726]: time="2025-01-14T13:10:06.536999867Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:06.547057 containerd[1726]: time="2025-01-14T13:10:06.546985254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:06.548048 containerd[1726]: time="2025-01-14T13:10:06.547634260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.153494493s"
Jan 14 13:10:06.548048 containerd[1726]: time="2025-01-14T13:10:06.547675360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 14 13:10:06.549936 containerd[1726]: time="2025-01-14T13:10:06.549902380Z" level=info msg="CreateContainer within sandbox \"983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 14 13:10:06.596325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6612044.mount: Deactivated successfully.
Jan 14 13:10:06.614619 containerd[1726]: time="2025-01-14T13:10:06.614477643Z" level=info msg="CreateContainer within sandbox \"983af37f993ba2c865c4679dda3310e9891904ff6dfee4112160b610359d74ae\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f04ad45c79b5f4134ddd8de4062963f5ea1a36e16290519ee7ea43117595c229\""
Jan 14 13:10:06.615409 containerd[1726]: time="2025-01-14T13:10:06.615374651Z" level=info msg="StartContainer for \"f04ad45c79b5f4134ddd8de4062963f5ea1a36e16290519ee7ea43117595c229\""
Jan 14 13:10:06.656485 systemd[1]: Started cri-containerd-f04ad45c79b5f4134ddd8de4062963f5ea1a36e16290519ee7ea43117595c229.scope - libcontainer container f04ad45c79b5f4134ddd8de4062963f5ea1a36e16290519ee7ea43117595c229.
Jan 14 13:10:06.689361 containerd[1726]: time="2025-01-14T13:10:06.689180895Z" level=info msg="StartContainer for \"f04ad45c79b5f4134ddd8de4062963f5ea1a36e16290519ee7ea43117595c229\" returns successfully"
Jan 14 13:10:07.356284 kubelet[2558]: E0114 13:10:07.356151    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:07.459067 kubelet[2558]: I0114 13:10:07.459021    2558 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 14 13:10:07.459067 kubelet[2558]: I0114 13:10:07.459059    2558 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 14 13:10:08.357360 kubelet[2558]: E0114 13:10:08.357313    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:09.358045 kubelet[2558]: E0114 13:10:09.358007    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:10.358455 kubelet[2558]: E0114 13:10:10.358394    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:11.358746 kubelet[2558]: E0114 13:10:11.358676    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:12.329983 kubelet[2558]: E0114 13:10:12.329920    2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:12.359421 kubelet[2558]: E0114 13:10:12.359355    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:12.881639 kubelet[2558]: I0114 13:10:12.881571    2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cjcc6" podStartSLOduration=32.254073894 podStartE2EDuration="40.881549967s" podCreationTimestamp="2025-01-14 13:09:32 +0000 UTC" firstStartedPulling="2025-01-14 13:09:57.921139195 +0000 UTC m=+26.679074712" lastFinishedPulling="2025-01-14 13:10:06.548615368 +0000 UTC m=+35.306550785" observedRunningTime="2025-01-14 13:10:07.617440895 +0000 UTC m=+36.375376312" watchObservedRunningTime="2025-01-14 13:10:12.881549967 +0000 UTC m=+41.639485384"
Jan 14 13:10:12.881886 kubelet[2558]: I0114 13:10:12.881821    2558 topology_manager.go:215] "Topology Admit Handler" podUID="0180736d-5104-4242-8728-c139201345d1" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 14 13:10:12.887524 systemd[1]: Created slice kubepods-besteffort-pod0180736d_5104_4242_8728_c139201345d1.slice - libcontainer container kubepods-besteffort-pod0180736d_5104_4242_8728_c139201345d1.slice.
Jan 14 13:10:12.900186 kubelet[2558]: I0114 13:10:12.900045    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tm2\" (UniqueName: \"kubernetes.io/projected/0180736d-5104-4242-8728-c139201345d1-kube-api-access-k8tm2\") pod \"nfs-server-provisioner-0\" (UID: \"0180736d-5104-4242-8728-c139201345d1\") " pod="default/nfs-server-provisioner-0"
Jan 14 13:10:12.900186 kubelet[2558]: I0114 13:10:12.900091    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0180736d-5104-4242-8728-c139201345d1-data\") pod \"nfs-server-provisioner-0\" (UID: \"0180736d-5104-4242-8728-c139201345d1\") " pod="default/nfs-server-provisioner-0"
Jan 14 13:10:13.191517 containerd[1726]: time="2025-01-14T13:10:13.191232847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0180736d-5104-4242-8728-c139201345d1,Namespace:default,Attempt:0,}"
Jan 14 13:10:13.359664 kubelet[2558]: E0114 13:10:13.359617    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:13.380404 systemd-networkd[1336]: cali60e51b789ff: Link UP
Jan 14 13:10:13.380600 systemd-networkd[1336]: cali60e51b789ff: Gained carrier
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.307 [INFO][4071] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  0180736d-5104-4242-8728-c139201345d1 1417 0 2025-01-14 13:10:12 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  10.200.8.19  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.307 [INFO][4071] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.332 [INFO][4081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" HandleID="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Workload="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.342 [INFO][4081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" HandleID="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Workload="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051850), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.19", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-14 13:10:13.332520491 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.342 [INFO][4081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.342 [INFO][4081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.342 [INFO][4081] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19'
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.344 [INFO][4081] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.349 [INFO][4081] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.353 [INFO][4081] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.355 [INFO][4081] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.356 [INFO][4081] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.356 [INFO][4081] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.358 [INFO][4081] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.369 [INFO][4081] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.375 [INFO][4081] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.131/26] block=192.168.41.128/26 handle="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.375 [INFO][4081] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.131/26] handle="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" host="10.200.8.19"
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.375 [INFO][4081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 14 13:10:13.394396 containerd[1726]: 2025-01-14 13:10:13.375 [INFO][4081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.131/26] IPv6=[] ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" HandleID="k8s-pod-network.0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Workload="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.395283 containerd[1726]: 2025-01-14 13:10:13.376 [INFO][4071] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0180736d-5104-4242-8728-c139201345d1", ResourceVersion:"1417", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 10, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:10:13.395283 containerd[1726]: 2025-01-14 13:10:13.377 [INFO][4071] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.131/32] ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.395283 containerd[1726]: 2025-01-14 13:10:13.377 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.395283 containerd[1726]: 2025-01-14 13:10:13.379 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.395597 containerd[1726]: 2025-01-14 13:10:13.379 [INFO][4071] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0180736d-5104-4242-8728-c139201345d1", ResourceVersion:"1417", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 10, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b2:2b:ba:6b:d7:2e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:10:13.395597 containerd[1726]: 2025-01-14 13:10:13.393 [INFO][4071] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.8.19-k8s-nfs--server--provisioner--0-eth0"
Jan 14 13:10:13.422904 containerd[1726]: time="2025-01-14T13:10:13.422809497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:10:13.422904 containerd[1726]: time="2025-01-14T13:10:13.422874998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:10:13.423184 containerd[1726]: time="2025-01-14T13:10:13.422895099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:10:13.423184 containerd[1726]: time="2025-01-14T13:10:13.422982500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:10:13.450463 systemd[1]: Started cri-containerd-0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518.scope - libcontainer container 0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518.
Jan 14 13:10:13.489694 containerd[1726]: time="2025-01-14T13:10:13.489644064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0180736d-5104-4242-8728-c139201345d1,Namespace:default,Attempt:0,} returns sandbox id \"0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518\""
Jan 14 13:10:13.492136 containerd[1726]: time="2025-01-14T13:10:13.492032699Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 14 13:10:14.360103 kubelet[2558]: E0114 13:10:14.360043    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:15.321887 systemd-networkd[1336]: cali60e51b789ff: Gained IPv6LL
Jan 14 13:10:15.360684 kubelet[2558]: E0114 13:10:15.360644    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:16.122381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609258843.mount: Deactivated successfully.
Jan 14 13:10:16.361780 kubelet[2558]: E0114 13:10:16.361725    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:17.362668 kubelet[2558]: E0114 13:10:17.362618    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:18.363153 kubelet[2558]: E0114 13:10:18.363101    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:19.364077 kubelet[2558]: E0114 13:10:19.364024    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:20.364726 kubelet[2558]: E0114 13:10:20.364660    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:21.365214 kubelet[2558]: E0114 13:10:21.365159    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:22.366427 kubelet[2558]: E0114 13:10:22.366361    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:23.367334 kubelet[2558]: E0114 13:10:23.367241    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:23.860063 containerd[1726]: time="2025-01-14T13:10:23.859998982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:23.865882 containerd[1726]: time="2025-01-14T13:10:23.865818951Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414"
Jan 14 13:10:23.872090 containerd[1726]: time="2025-01-14T13:10:23.872005424Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:23.880996 containerd[1726]: time="2025-01-14T13:10:23.880914730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:23.882131 containerd[1726]: time="2025-01-14T13:10:23.881952342Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 10.389883543s"
Jan 14 13:10:23.882131 containerd[1726]: time="2025-01-14T13:10:23.882010743Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 14 13:10:23.885113 containerd[1726]: time="2025-01-14T13:10:23.885081979Z" level=info msg="CreateContainer within sandbox \"0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 14 13:10:23.949442 containerd[1726]: time="2025-01-14T13:10:23.949386442Z" level=info msg="CreateContainer within sandbox \"0bba80f5dc5104f6c605a279969da12959b0bc2474c657338e3d13e0a2313518\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"300c1a5986ea80abf2946825fd2fb997723a93d197064ec58775b12220510d19\""
Jan 14 13:10:23.950151 containerd[1726]: time="2025-01-14T13:10:23.950117350Z" level=info msg="StartContainer for \"300c1a5986ea80abf2946825fd2fb997723a93d197064ec58775b12220510d19\""
Jan 14 13:10:23.979484 systemd[1]: run-containerd-runc-k8s.io-300c1a5986ea80abf2946825fd2fb997723a93d197064ec58775b12220510d19-runc.ylQwag.mount: Deactivated successfully.
Jan 14 13:10:23.986453 systemd[1]: Started cri-containerd-300c1a5986ea80abf2946825fd2fb997723a93d197064ec58775b12220510d19.scope - libcontainer container 300c1a5986ea80abf2946825fd2fb997723a93d197064ec58775b12220510d19.
Jan 14 13:10:24.017103 containerd[1726]: time="2025-01-14T13:10:24.017046444Z" level=info msg="StartContainer for \"300c1a5986ea80abf2946825fd2fb997723a93d197064ec58775b12220510d19\" returns successfully"
Jan 14 13:10:24.368060 kubelet[2558]: E0114 13:10:24.367989    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:25.368510 kubelet[2558]: E0114 13:10:25.368434    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:26.368995 kubelet[2558]: E0114 13:10:26.368928    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:27.369841 kubelet[2558]: E0114 13:10:27.369770    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:28.370810 kubelet[2558]: E0114 13:10:28.370748    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:29.371285 kubelet[2558]: E0114 13:10:29.371214    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:30.371868 kubelet[2558]: E0114 13:10:30.371791    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:31.373018 kubelet[2558]: E0114 13:10:31.372957    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:32.329842 kubelet[2558]: E0114 13:10:32.329787    2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:32.350622 containerd[1726]: time="2025-01-14T13:10:32.350578712Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:10:32.351241 containerd[1726]: time="2025-01-14T13:10:32.350705213Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:10:32.351241 containerd[1726]: time="2025-01-14T13:10:32.350721514Z" level=info msg="StopPodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:10:32.351241 containerd[1726]: time="2025-01-14T13:10:32.351181119Z" level=info msg="RemovePodSandbox for \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:10:32.351241 containerd[1726]: time="2025-01-14T13:10:32.351214719Z" level=info msg="Forcibly stopping sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\""
Jan 14 13:10:32.351441 containerd[1726]: time="2025-01-14T13:10:32.351325220Z" level=info msg="TearDown network for sandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" successfully"
Jan 14 13:10:32.364683 containerd[1726]: time="2025-01-14T13:10:32.364618769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.364868 containerd[1726]: time="2025-01-14T13:10:32.364700169Z" level=info msg="RemovePodSandbox \"90f4052d2de35cc14aad93a8f96e607744e5d58b73a6bfdcf570f7c479d8a374\" returns successfully"
Jan 14 13:10:32.365334 containerd[1726]: time="2025-01-14T13:10:32.365280776Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:10:32.365446 containerd[1726]: time="2025-01-14T13:10:32.365420778Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:10:32.365446 containerd[1726]: time="2025-01-14T13:10:32.365440778Z" level=info msg="StopPodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:10:32.365806 containerd[1726]: time="2025-01-14T13:10:32.365776381Z" level=info msg="RemovePodSandbox for \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:10:32.365904 containerd[1726]: time="2025-01-14T13:10:32.365808982Z" level=info msg="Forcibly stopping sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\""
Jan 14 13:10:32.365957 containerd[1726]: time="2025-01-14T13:10:32.365894283Z" level=info msg="TearDown network for sandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" successfully"
Jan 14 13:10:32.373096 kubelet[2558]: E0114 13:10:32.373045    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:32.376972 containerd[1726]: time="2025-01-14T13:10:32.376912406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.377097 containerd[1726]: time="2025-01-14T13:10:32.376983106Z" level=info msg="RemovePodSandbox \"1627ffd791ec3fa6e83c37f9fbbe2db1c8de40c7b9f1fb64c4b81dcdd6f74dad\" returns successfully"
Jan 14 13:10:32.377524 containerd[1726]: time="2025-01-14T13:10:32.377496112Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:10:32.377641 containerd[1726]: time="2025-01-14T13:10:32.377611813Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:10:32.377641 containerd[1726]: time="2025-01-14T13:10:32.377627714Z" level=info msg="StopPodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:10:32.378106 containerd[1726]: time="2025-01-14T13:10:32.377979718Z" level=info msg="RemovePodSandbox for \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:10:32.378106 containerd[1726]: time="2025-01-14T13:10:32.378009718Z" level=info msg="Forcibly stopping sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\""
Jan 14 13:10:32.378262 containerd[1726]: time="2025-01-14T13:10:32.378087019Z" level=info msg="TearDown network for sandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" successfully"
Jan 14 13:10:32.400365 containerd[1726]: time="2025-01-14T13:10:32.400260966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.400365 containerd[1726]: time="2025-01-14T13:10:32.400373967Z" level=info msg="RemovePodSandbox \"842dc5490f47666ce2322f01b1f3305ab156eea0a589f979bf0c10b560784a24\" returns successfully"
Jan 14 13:10:32.401080 containerd[1726]: time="2025-01-14T13:10:32.401034975Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:10:32.401221 containerd[1726]: time="2025-01-14T13:10:32.401198076Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:10:32.401284 containerd[1726]: time="2025-01-14T13:10:32.401224777Z" level=info msg="StopPodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:10:32.401767 containerd[1726]: time="2025-01-14T13:10:32.401703782Z" level=info msg="RemovePodSandbox for \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:10:32.401767 containerd[1726]: time="2025-01-14T13:10:32.401747182Z" level=info msg="Forcibly stopping sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\""
Jan 14 13:10:32.401965 containerd[1726]: time="2025-01-14T13:10:32.401846084Z" level=info msg="TearDown network for sandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" successfully"
Jan 14 13:10:32.416531 containerd[1726]: time="2025-01-14T13:10:32.416482347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.416738 containerd[1726]: time="2025-01-14T13:10:32.416555548Z" level=info msg="RemovePodSandbox \"cc64ff6e1a03657f7b4aa569ee9a8be110b16a6513c505b737f21138084b4bb6\" returns successfully"
Jan 14 13:10:32.417002 containerd[1726]: time="2025-01-14T13:10:32.416924652Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:10:32.417132 containerd[1726]: time="2025-01-14T13:10:32.417043653Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:10:32.417132 containerd[1726]: time="2025-01-14T13:10:32.417100054Z" level=info msg="StopPodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:10:32.417487 containerd[1726]: time="2025-01-14T13:10:32.417434357Z" level=info msg="RemovePodSandbox for \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:10:32.417554 containerd[1726]: time="2025-01-14T13:10:32.417487958Z" level=info msg="Forcibly stopping sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\""
Jan 14 13:10:32.417643 containerd[1726]: time="2025-01-14T13:10:32.417589159Z" level=info msg="TearDown network for sandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" successfully"
Jan 14 13:10:32.441713 containerd[1726]: time="2025-01-14T13:10:32.441637727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.441861 containerd[1726]: time="2025-01-14T13:10:32.441717128Z" level=info msg="RemovePodSandbox \"4da220c9eeb891bccf8a66c21dd72d6dab5f1ac0505ff94139aedd4cd5a68561\" returns successfully"
Jan 14 13:10:32.442541 containerd[1726]: time="2025-01-14T13:10:32.442256234Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:10:32.442541 containerd[1726]: time="2025-01-14T13:10:32.442412636Z" level=info msg="TearDown network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" successfully"
Jan 14 13:10:32.442541 containerd[1726]: time="2025-01-14T13:10:32.442431736Z" level=info msg="StopPodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" returns successfully"
Jan 14 13:10:32.444260 containerd[1726]: time="2025-01-14T13:10:32.443066843Z" level=info msg="RemovePodSandbox for \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:10:32.444260 containerd[1726]: time="2025-01-14T13:10:32.443098943Z" level=info msg="Forcibly stopping sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\""
Jan 14 13:10:32.444260 containerd[1726]: time="2025-01-14T13:10:32.443176444Z" level=info msg="TearDown network for sandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" successfully"
Jan 14 13:10:32.470859 containerd[1726]: time="2025-01-14T13:10:32.470761852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.470859 containerd[1726]: time="2025-01-14T13:10:32.470839753Z" level=info msg="RemovePodSandbox \"77b65cdc291308e003fcb16a01ed058c5bce3521eab41750e19707de3a459dcb\" returns successfully"
Jan 14 13:10:32.471382 containerd[1726]: time="2025-01-14T13:10:32.471349058Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\""
Jan 14 13:10:32.471492 containerd[1726]: time="2025-01-14T13:10:32.471463660Z" level=info msg="TearDown network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" successfully"
Jan 14 13:10:32.471492 containerd[1726]: time="2025-01-14T13:10:32.471481360Z" level=info msg="StopPodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" returns successfully"
Jan 14 13:10:32.471940 containerd[1726]: time="2025-01-14T13:10:32.471890764Z" level=info msg="RemovePodSandbox for \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\""
Jan 14 13:10:32.471940 containerd[1726]: time="2025-01-14T13:10:32.471920865Z" level=info msg="Forcibly stopping sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\""
Jan 14 13:10:32.472068 containerd[1726]: time="2025-01-14T13:10:32.472014566Z" level=info msg="TearDown network for sandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" successfully"
Jan 14 13:10:32.489214 containerd[1726]: time="2025-01-14T13:10:32.489146157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.489214 containerd[1726]: time="2025-01-14T13:10:32.489220458Z" level=info msg="RemovePodSandbox \"ebed3c39c35fb775381f8e002af205cd708166e492d62407704fca384391d613\" returns successfully"
Jan 14 13:10:32.489784 containerd[1726]: time="2025-01-14T13:10:32.489755464Z" level=info msg="StopPodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\""
Jan 14 13:10:32.489915 containerd[1726]: time="2025-01-14T13:10:32.489866765Z" level=info msg="TearDown network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" successfully"
Jan 14 13:10:32.489915 containerd[1726]: time="2025-01-14T13:10:32.489885365Z" level=info msg="StopPodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" returns successfully"
Jan 14 13:10:32.490245 containerd[1726]: time="2025-01-14T13:10:32.490156168Z" level=info msg="RemovePodSandbox for \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\""
Jan 14 13:10:32.490245 containerd[1726]: time="2025-01-14T13:10:32.490184068Z" level=info msg="Forcibly stopping sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\""
Jan 14 13:10:32.490423 containerd[1726]: time="2025-01-14T13:10:32.490260869Z" level=info msg="TearDown network for sandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" successfully"
Jan 14 13:10:32.508156 containerd[1726]: time="2025-01-14T13:10:32.507966967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.508156 containerd[1726]: time="2025-01-14T13:10:32.508038967Z" level=info msg="RemovePodSandbox \"bfa7299060acc581f22bc6e96b932e211514bd247b55c90ab5a52f7b86238686\" returns successfully"
Jan 14 13:10:32.508840 containerd[1726]: time="2025-01-14T13:10:32.508648274Z" level=info msg="StopPodSandbox for \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\""
Jan 14 13:10:32.508840 containerd[1726]: time="2025-01-14T13:10:32.508766576Z" level=info msg="TearDown network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\" successfully"
Jan 14 13:10:32.508840 containerd[1726]: time="2025-01-14T13:10:32.508777976Z" level=info msg="StopPodSandbox for \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\" returns successfully"
Jan 14 13:10:32.510210 containerd[1726]: time="2025-01-14T13:10:32.509192480Z" level=info msg="RemovePodSandbox for \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\""
Jan 14 13:10:32.510210 containerd[1726]: time="2025-01-14T13:10:32.509220481Z" level=info msg="Forcibly stopping sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\""
Jan 14 13:10:32.510210 containerd[1726]: time="2025-01-14T13:10:32.509304882Z" level=info msg="TearDown network for sandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\" successfully"
Jan 14 13:10:32.533483 containerd[1726]: time="2025-01-14T13:10:32.533430550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.533483 containerd[1726]: time="2025-01-14T13:10:32.533511351Z" level=info msg="RemovePodSandbox \"8452e7cf01205620c6671c449c5dbf573c177efecb9a2a97874802e988c48b15\" returns successfully"
Jan 14 13:10:32.534057 containerd[1726]: time="2025-01-14T13:10:32.533997057Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\""
Jan 14 13:10:32.534178 containerd[1726]: time="2025-01-14T13:10:32.534115758Z" level=info msg="TearDown network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" successfully"
Jan 14 13:10:32.534178 containerd[1726]: time="2025-01-14T13:10:32.534136958Z" level=info msg="StopPodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" returns successfully"
Jan 14 13:10:32.534542 containerd[1726]: time="2025-01-14T13:10:32.534511763Z" level=info msg="RemovePodSandbox for \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\""
Jan 14 13:10:32.534632 containerd[1726]: time="2025-01-14T13:10:32.534545963Z" level=info msg="Forcibly stopping sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\""
Jan 14 13:10:32.534678 containerd[1726]: time="2025-01-14T13:10:32.534630264Z" level=info msg="TearDown network for sandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" successfully"
Jan 14 13:10:32.549228 containerd[1726]: time="2025-01-14T13:10:32.549147626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.549388 containerd[1726]: time="2025-01-14T13:10:32.549246727Z" level=info msg="RemovePodSandbox \"c20c1ead3e903cbf46eea4155a8d361af26c08f34a3850f9ee72ad640600efe5\" returns successfully"
Jan 14 13:10:32.549794 containerd[1726]: time="2025-01-14T13:10:32.549754532Z" level=info msg="StopPodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\""
Jan 14 13:10:32.549889 containerd[1726]: time="2025-01-14T13:10:32.549865034Z" level=info msg="TearDown network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" successfully"
Jan 14 13:10:32.549889 containerd[1726]: time="2025-01-14T13:10:32.549880434Z" level=info msg="StopPodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" returns successfully"
Jan 14 13:10:32.550210 containerd[1726]: time="2025-01-14T13:10:32.550172337Z" level=info msg="RemovePodSandbox for \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\""
Jan 14 13:10:32.550210 containerd[1726]: time="2025-01-14T13:10:32.550202337Z" level=info msg="Forcibly stopping sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\""
Jan 14 13:10:32.550367 containerd[1726]: time="2025-01-14T13:10:32.550279638Z" level=info msg="TearDown network for sandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" successfully"
Jan 14 13:10:32.562195 containerd[1726]: time="2025-01-14T13:10:32.562151571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.562515 containerd[1726]: time="2025-01-14T13:10:32.562217471Z" level=info msg="RemovePodSandbox \"868b3f58f4c7a953fea960fd8f959411cfc904cc59c433b222a2358d5149c4f2\" returns successfully"
Jan 14 13:10:32.562779 containerd[1726]: time="2025-01-14T13:10:32.562726977Z" level=info msg="StopPodSandbox for \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\""
Jan 14 13:10:32.562854 containerd[1726]: time="2025-01-14T13:10:32.562841178Z" level=info msg="TearDown network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\" successfully"
Jan 14 13:10:32.562897 containerd[1726]: time="2025-01-14T13:10:32.562857278Z" level=info msg="StopPodSandbox for \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\" returns successfully"
Jan 14 13:10:32.563253 containerd[1726]: time="2025-01-14T13:10:32.563226683Z" level=info msg="RemovePodSandbox for \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\""
Jan 14 13:10:32.563361 containerd[1726]: time="2025-01-14T13:10:32.563254883Z" level=info msg="Forcibly stopping sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\""
Jan 14 13:10:32.563404 containerd[1726]: time="2025-01-14T13:10:32.563348484Z" level=info msg="TearDown network for sandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\" successfully"
Jan 14 13:10:32.576517 containerd[1726]: time="2025-01-14T13:10:32.576460930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 14 13:10:32.576803 containerd[1726]: time="2025-01-14T13:10:32.576542931Z" level=info msg="RemovePodSandbox \"16bde7a5451fa22951a9b2ecdeb2565561222e8b99b40b7909fab4fffe3595a7\" returns successfully"
Jan 14 13:10:33.373433 kubelet[2558]: E0114 13:10:33.373375    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:34.374155 kubelet[2558]: E0114 13:10:34.374100    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:35.374482 kubelet[2558]: E0114 13:10:35.374403    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:36.374810 kubelet[2558]: E0114 13:10:36.374742    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:37.375182 kubelet[2558]: E0114 13:10:37.375114    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:38.375935 kubelet[2558]: E0114 13:10:38.375869    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:39.376833 kubelet[2558]: E0114 13:10:39.376763    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:40.377319 kubelet[2558]: E0114 13:10:40.377222    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:41.377509 kubelet[2558]: E0114 13:10:41.377439    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:42.378436 kubelet[2558]: E0114 13:10:42.378375    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:43.379258 kubelet[2558]: E0114 13:10:43.379194    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:44.379742 kubelet[2558]: E0114 13:10:44.379675    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:45.380259 kubelet[2558]: E0114 13:10:45.380194    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:46.380790 kubelet[2558]: E0114 13:10:46.380724    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:47.381899 kubelet[2558]: E0114 13:10:47.381814    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:48.382067 kubelet[2558]: E0114 13:10:48.382001    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:49.114405 kubelet[2558]: I0114 13:10:49.114337    2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=26.722505907 podStartE2EDuration="37.114318174s" podCreationTimestamp="2025-01-14 13:10:12 +0000 UTC" firstStartedPulling="2025-01-14 13:10:13.491497191 +0000 UTC m=+42.249432608" lastFinishedPulling="2025-01-14 13:10:23.883309358 +0000 UTC m=+52.641244875" observedRunningTime="2025-01-14 13:10:24.652728078 +0000 UTC m=+53.410663595" watchObservedRunningTime="2025-01-14 13:10:49.114318174 +0000 UTC m=+77.872253591"
Jan 14 13:10:49.114743 kubelet[2558]: I0114 13:10:49.114706    2558 topology_manager.go:215] "Topology Admit Handler" podUID="82eb6380-a5c8-4cb7-8949-9482410ba274" podNamespace="default" podName="test-pod-1"
Jan 14 13:10:49.121677 systemd[1]: Created slice kubepods-besteffort-pod82eb6380_a5c8_4cb7_8949_9482410ba274.slice - libcontainer container kubepods-besteffort-pod82eb6380_a5c8_4cb7_8949_9482410ba274.slice.
Jan 14 13:10:49.308420 kubelet[2558]: I0114 13:10:49.308261    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ad24cc6-9859-4b1f-bee6-6ae70e1092ea\" (UniqueName: \"kubernetes.io/nfs/82eb6380-a5c8-4cb7-8949-9482410ba274-pvc-6ad24cc6-9859-4b1f-bee6-6ae70e1092ea\") pod \"test-pod-1\" (UID: \"82eb6380-a5c8-4cb7-8949-9482410ba274\") " pod="default/test-pod-1"
Jan 14 13:10:49.308420 kubelet[2558]: I0114 13:10:49.308354    2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p575\" (UniqueName: \"kubernetes.io/projected/82eb6380-a5c8-4cb7-8949-9482410ba274-kube-api-access-8p575\") pod \"test-pod-1\" (UID: \"82eb6380-a5c8-4cb7-8949-9482410ba274\") " pod="default/test-pod-1"
Jan 14 13:10:49.383309 kubelet[2558]: E0114 13:10:49.383118    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:49.522326 kernel: FS-Cache: Loaded
Jan 14 13:10:49.636680 kernel: RPC: Registered named UNIX socket transport module.
Jan 14 13:10:49.636816 kernel: RPC: Registered udp transport module.
Jan 14 13:10:49.636836 kernel: RPC: Registered tcp transport module.
Jan 14 13:10:49.640199 kernel: RPC: Registered tcp-with-tls transport module.
Jan 14 13:10:49.640319 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 14 13:10:49.980969 kernel: NFS: Registering the id_resolver key type
Jan 14 13:10:49.981111 kernel: Key type id_resolver registered
Jan 14 13:10:49.981131 kernel: Key type id_legacy registered
Jan 14 13:10:50.112802 nfsidmap[4326]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-6f4e4149be'
Jan 14 13:10:50.133636 nfsidmap[4327]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-a-6f4e4149be'
Jan 14 13:10:50.324820 containerd[1726]: time="2025-01-14T13:10:50.324764862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:82eb6380-a5c8-4cb7-8949-9482410ba274,Namespace:default,Attempt:0,}"
Jan 14 13:10:50.383779 kubelet[2558]: E0114 13:10:50.383389    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:50.474976 systemd-networkd[1336]: cali5ec59c6bf6e: Link UP
Jan 14 13:10:50.476506 systemd-networkd[1336]: cali5ec59c6bf6e: Gained carrier
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.395 [INFO][4328] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.8.19-k8s-test--pod--1-eth0  default  82eb6380-a5c8-4cb7-8949-9482410ba274 1534 0 2025-01-14 13:10:14 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  10.200.8.19  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.395 [INFO][4328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.423 [INFO][4339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" HandleID="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Workload="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.434 [INFO][4339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" HandleID="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Workload="10.200.8.19-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2b60), Attrs:map[string]string{"namespace":"default", "node":"10.200.8.19", "pod":"test-pod-1", "timestamp":"2025-01-14 13:10:50.423959444 +0000 UTC"}, Hostname:"10.200.8.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.434 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.435 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.435 [INFO][4339] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.8.19'
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.436 [INFO][4339] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.442 [INFO][4339] ipam/ipam.go 372: Looking up existing affinities for host host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.446 [INFO][4339] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.448 [INFO][4339] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.450 [INFO][4339] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.450 [INFO][4339] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.452 [INFO][4339] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.458 [INFO][4339] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.469 [INFO][4339] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.132/26] block=192.168.41.128/26 handle="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.469 [INFO][4339] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.132/26] handle="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" host="10.200.8.19"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.469 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.469 [INFO][4339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.132/26] IPv6=[] ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" HandleID="k8s-pod-network.448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Workload="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.488771 containerd[1726]: 2025-01-14 13:10:50.471 [INFO][4328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"82eb6380-a5c8-4cb7-8949-9482410ba274", ResourceVersion:"1534", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 10, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:10:50.491049 containerd[1726]: 2025-01-14 13:10:50.471 [INFO][4328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.132/32] ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.491049 containerd[1726]: 2025-01-14 13:10:50.471 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.491049 containerd[1726]: 2025-01-14 13:10:50.476 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.491049 containerd[1726]: 2025-01-14 13:10:50.476 [INFO][4328] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"82eb6380-a5c8-4cb7-8949-9482410ba274", ResourceVersion:"1534", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 13, 10, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.8.19", ContainerID:"448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2a:c7:46:8f:d8:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 14 13:10:50.491049 containerd[1726]: 2025-01-14 13:10:50.487 [INFO][4328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.8.19-k8s-test--pod--1-eth0"
Jan 14 13:10:50.518381 containerd[1726]: time="2025-01-14T13:10:50.518234178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:10:50.518381 containerd[1726]: time="2025-01-14T13:10:50.518307879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:10:50.518381 containerd[1726]: time="2025-01-14T13:10:50.518324779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:10:50.518647 containerd[1726]: time="2025-01-14T13:10:50.518412480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:10:50.542501 systemd[1]: run-containerd-runc-k8s.io-448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0-runc.LZCtVh.mount: Deactivated successfully.
Jan 14 13:10:50.548458 systemd[1]: Started cri-containerd-448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0.scope - libcontainer container 448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0.
Jan 14 13:10:50.590388 containerd[1726]: time="2025-01-14T13:10:50.590347192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:82eb6380-a5c8-4cb7-8949-9482410ba274,Namespace:default,Attempt:0,} returns sandbox id \"448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0\""
Jan 14 13:10:50.592327 containerd[1726]: time="2025-01-14T13:10:50.592277711Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 14 13:10:50.980582 containerd[1726]: time="2025-01-14T13:10:50.980522056Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 14 13:10:50.985628 containerd[1726]: time="2025-01-14T13:10:50.985553006Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 14 13:10:50.991334 containerd[1726]: time="2025-01-14T13:10:50.989674447Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 397.177634ms"
Jan 14 13:10:50.991334 containerd[1726]: time="2025-01-14T13:10:50.989723147Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 14 13:10:50.995234 containerd[1726]: time="2025-01-14T13:10:50.995192401Z" level=info msg="CreateContainer within sandbox \"448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 14 13:10:51.036866 containerd[1726]: time="2025-01-14T13:10:51.036819514Z" level=info msg="CreateContainer within sandbox \"448ced0fa5daf0db1f1d39248c66bd38c2a99be4b11b6c9af057d06441fa45a0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"65e10c383ffad86b30ac6f15dd695be136fafb2e205f75d6801e093c4ba5956e\""
Jan 14 13:10:51.037460 containerd[1726]: time="2025-01-14T13:10:51.037397419Z" level=info msg="StartContainer for \"65e10c383ffad86b30ac6f15dd695be136fafb2e205f75d6801e093c4ba5956e\""
Jan 14 13:10:51.064473 systemd[1]: Started cri-containerd-65e10c383ffad86b30ac6f15dd695be136fafb2e205f75d6801e093c4ba5956e.scope - libcontainer container 65e10c383ffad86b30ac6f15dd695be136fafb2e205f75d6801e093c4ba5956e.
Jan 14 13:10:51.093715 containerd[1726]: time="2025-01-14T13:10:51.093668377Z" level=info msg="StartContainer for \"65e10c383ffad86b30ac6f15dd695be136fafb2e205f75d6801e093c4ba5956e\" returns successfully"
Jan 14 13:10:51.383594 kubelet[2558]: E0114 13:10:51.383528    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:51.673580 systemd-networkd[1336]: cali5ec59c6bf6e: Gained IPv6LL
Jan 14 13:10:52.330510 kubelet[2558]: E0114 13:10:52.330442    2558 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:52.383872 kubelet[2558]: E0114 13:10:52.383816    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:53.384203 kubelet[2558]: E0114 13:10:53.384140    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:54.384868 kubelet[2558]: E0114 13:10:54.384804    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 14 13:10:55.385307 kubelet[2558]: E0114 13:10:55.385222    2558 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"