Dec 13 13:29:36.036955 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:29:36.036991 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:29:36.037005 kernel: BIOS-provided physical RAM map:
Dec 13 13:29:36.037016 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:29:36.037027 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 13:29:36.037037 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 13:29:36.037051 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 13:29:36.037062 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 13:29:36.037076 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 13:29:36.037087 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 13:29:36.037098 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 13:29:36.037108 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 13:29:36.037119 kernel: NX (Execute Disable) protection: active
Dec 13 13:29:36.037131 kernel: APIC: Static calls initialized
Dec 13 13:29:36.037147 kernel: efi: EFI v2.7 by Microsoft
Dec 13 13:29:36.037160 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Dec 13 13:29:36.037172 kernel: random: crng init done
Dec 13 13:29:36.037185 kernel: secureboot: Secure boot disabled
Dec 13 13:29:36.037197 kernel: SMBIOS 3.1.0 present.
Dec 13 13:29:36.037209 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 13:29:36.037222 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 13:29:36.037234 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 13:29:36.037246 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Dec 13 13:29:36.037259 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 13:29:36.037273 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 13:29:36.037286 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 13:29:36.037298 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 13:29:36.037311 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 13:29:36.037324 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 13:29:36.037337 kernel: tsc: Detected 2593.903 MHz processor
Dec 13 13:29:36.037350 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:29:36.037362 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:29:36.037375 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 13:29:36.037390 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 13:29:36.037403 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:29:36.037416 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 13:29:36.037428 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 13:29:36.037441 kernel: Using GB pages for direct mapping
Dec 13 13:29:36.037454 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:29:36.037467 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 13:29:36.037484 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037500 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037533 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 13:29:36.037546 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 13:29:36.037560 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037574 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037587 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037604 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037616 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037630 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037644 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 13:29:36.037657 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 13:29:36.037671 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 13:29:36.037684 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 13:29:36.037698 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 13:29:36.037711 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 13:29:36.037727 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 13:29:36.037740 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 13:29:36.037754 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 13:29:36.037767 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 13:29:36.037781 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 13:29:36.037795 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 13:29:36.037808 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 13:29:36.037821 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 13:29:36.037835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 13:29:36.037850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 13:29:36.037863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 13:29:36.037878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 13:29:36.037891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 13:29:36.037905 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 13:29:36.037918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 13:29:36.037932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 13:29:36.037945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 13:29:36.037961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 13:29:36.037974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 13:29:36.037988 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 13:29:36.038002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 13:29:36.038015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 13:29:36.038029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 13:29:36.038042 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 13:29:36.038056 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 13:29:36.038070 kernel: Zone ranges:
Dec 13 13:29:36.038085 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:29:36.038099 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 13:29:36.038112 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 13:29:36.038125 kernel: Movable zone start for each node
Dec 13 13:29:36.038139 kernel: Early memory node ranges
Dec 13 13:29:36.038152 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 13:29:36.038166 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 13:29:36.038179 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 13:29:36.038192 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 13:29:36.038208 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 13:29:36.038221 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:29:36.038235 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 13:29:36.038252 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 13:29:36.038265 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 13:29:36.038278 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 13:29:36.038291 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:29:36.038305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:29:36.038319 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:29:36.038335 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 13:29:36.038348 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 13:29:36.038362 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 13:29:36.038375 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 13:29:36.038389 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:29:36.038403 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 13:29:36.038416 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 13:29:36.038430 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 13:29:36.038443 kernel: pcpu-alloc: [0] 0 1
Dec 13 13:29:36.038459 kernel: Hyper-V: PV spinlocks enabled
Dec 13 13:29:36.038472 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:29:36.038488 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:29:36.038503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:29:36.038525 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 13:29:36.038536 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:29:36.038548 kernel: Fallback order for Node 0: 0
Dec 13 13:29:36.038560 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 13:29:36.038574 kernel: Policy zone: Normal
Dec 13 13:29:36.038696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:29:36.038710 kernel: software IO TLB: area num 2.
Dec 13 13:29:36.038730 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 312164K reserved, 0K cma-reserved)
Dec 13 13:29:36.038741 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 13:29:36.038752 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:29:36.038760 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:29:36.040847 kernel: Dynamic Preempt: voluntary
Dec 13 13:29:36.040868 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:29:36.040883 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:29:36.040897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 13:29:36.040917 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:29:36.040930 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:29:36.040944 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:29:36.040958 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:29:36.040972 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 13:29:36.040985 kernel: Using NULL legacy PIC
Dec 13 13:29:36.041001 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 13:29:36.041015 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:29:36.041029 kernel: Console: colour dummy device 80x25
Dec 13 13:29:36.041042 kernel: printk: console [tty1] enabled
Dec 13 13:29:36.041056 kernel: printk: console [ttyS0] enabled
Dec 13 13:29:36.041070 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 13:29:36.041083 kernel: ACPI: Core revision 20230628
Dec 13 13:29:36.041097 kernel: Failed to register legacy timer interrupt
Dec 13 13:29:36.041111 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:29:36.041126 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 13 13:29:36.041140 kernel: Hyper-V: Using IPI hypercalls
Dec 13 13:29:36.041153 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 13 13:29:36.041167 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 13 13:29:36.041181 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 13 13:29:36.041195 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 13 13:29:36.041208 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 13 13:29:36.041222 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 13 13:29:36.041236 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593903)
Dec 13 13:29:36.041252 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 13:29:36.041266 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 13:29:36.041279 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:29:36.041293 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:29:36.041306 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:29:36.041320 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:29:36.041334 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 13:29:36.041347 kernel: RETBleed: Vulnerable
Dec 13 13:29:36.041360 kernel: Speculative Store Bypass: Vulnerable
Dec 13 13:29:36.041374 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 13:29:36.041390 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 13:29:36.041403 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 13:29:36.041417 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:29:36.041430 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:29:36.041444 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:29:36.041457 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 13:29:36.041471 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 13:29:36.041484 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 13:29:36.041498 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:29:36.041511 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 13:29:36.041532 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 13:29:36.041548 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 13:29:36.041562 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 13:29:36.041575 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:29:36.041588 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:29:36.041602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:29:36.041616 kernel: landlock: Up and running.
Dec 13 13:29:36.041629 kernel: SELinux: Initializing.
Dec 13 13:29:36.041643 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:29:36.041656 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:29:36.041670 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 13:29:36.041683 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:29:36.041700 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:29:36.041714 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:29:36.041727 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 13:29:36.041741 kernel: signal: max sigframe size: 3632
Dec 13 13:29:36.041755 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:29:36.041769 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:29:36.041782 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 13:29:36.041796 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:29:36.041810 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:29:36.041826 kernel: .... node #0, CPUs: #1
Dec 13 13:29:36.041840 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 13:29:36.041855 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 13:29:36.041868 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:29:36.041882 kernel: smpboot: Max logical packages: 1
Dec 13 13:29:36.041895 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Dec 13 13:29:36.041909 kernel: devtmpfs: initialized
Dec 13 13:29:36.041923 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:29:36.041936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 13:29:36.041953 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:29:36.041967 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 13:29:36.041981 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:29:36.041994 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:29:36.042008 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:29:36.042022 kernel: audit: type=2000 audit(1734096574.027:1): state=initialized audit_enabled=0 res=1
Dec 13 13:29:36.042035 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:29:36.042048 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:29:36.042062 kernel: cpuidle: using governor menu
Dec 13 13:29:36.042078 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:29:36.042091 kernel: dca service started, version 1.12.1
Dec 13 13:29:36.042105 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Dec 13 13:29:36.042118 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:29:36.042132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:29:36.042146 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:29:36.042159 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:29:36.042173 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:29:36.042189 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:29:36.042202 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:29:36.042216 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:29:36.042229 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:29:36.042243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:29:36.042257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:29:36.042270 kernel: ACPI: Interpreter enabled
Dec 13 13:29:36.042284 kernel: ACPI: PM: (supports S0 S5)
Dec 13 13:29:36.042297 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:29:36.042311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:29:36.042327 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 13 13:29:36.042341 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 13:29:36.042354 kernel: iommu: Default domain type: Translated
Dec 13 13:29:36.042368 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:29:36.042382 kernel: efivars: Registered efivars operations
Dec 13 13:29:36.042395 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:29:36.042409 kernel: PCI: System does not support PCI
Dec 13 13:29:36.042422 kernel: vgaarb: loaded
Dec 13 13:29:36.042436 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 13:29:36.042452 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:29:36.042465 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:29:36.042479 kernel: pnp: PnP ACPI init
Dec 13 13:29:36.042493 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 13:29:36.042507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:29:36.042527 kernel: NET: Registered PF_INET protocol family
Dec 13 13:29:36.042541 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:29:36.042555 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 13:29:36.042569 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:29:36.042586 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:29:36.042599 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 13:29:36.042613 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 13:29:36.042625 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 13:29:36.042640 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 13:29:36.042655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:29:36.042669 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:29:36.042683 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:29:36.042701 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 13:29:36.042715 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Dec 13 13:29:36.042730 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 13:29:36.042745 kernel: Initialise system trusted keyrings
Dec 13 13:29:36.042758 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 13:29:36.042773 kernel: Key type asymmetric registered
Dec 13 13:29:36.042788 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:29:36.042803 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:29:36.042817 kernel: io scheduler mq-deadline registered
Dec 13 13:29:36.042833 kernel: io scheduler kyber registered
Dec 13 13:29:36.042847 kernel: io scheduler bfq registered
Dec 13 13:29:36.042861 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:29:36.042875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:29:36.042889 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:29:36.042903 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 13:29:36.042918 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 13:29:36.043094 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 13:29:36.043223 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T13:29:35 UTC (1734096575)
Dec 13 13:29:36.043335 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 13:29:36.043353 kernel: intel_pstate: CPU model not supported
Dec 13 13:29:36.043368 kernel: efifb: probing for efifb
Dec 13 13:29:36.043382 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 13:29:36.043397 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 13:29:36.043411 kernel: efifb: scrolling: redraw
Dec 13 13:29:36.043425 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 13:29:36.043438 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 13:29:36.043457 kernel: fb0: EFI VGA frame buffer device
Dec 13 13:29:36.043470 kernel: pstore: Using crash dump compression: deflate
Dec 13 13:29:36.043486 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 13:29:36.043500 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:29:36.043528 kernel: Segment Routing with IPv6
Dec 13 13:29:36.043541 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:29:36.043551 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:29:36.043559 kernel: Key type dns_resolver registered
Dec 13 13:29:36.043569 kernel: IPI shorthand broadcast: enabled
Dec 13 13:29:36.043586 kernel: sched_clock: Marking stable (762002900, 39881000)->(989777600, -187893700)
Dec 13 13:29:36.043599 kernel: registered taskstats version 1
Dec 13 13:29:36.043613 kernel: Loading compiled-in X.509 certificates
Dec 13 13:29:36.043626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:29:36.043639 kernel: Key type .fscrypt registered
Dec 13 13:29:36.043652 kernel: Key type fscrypt-provisioning registered
Dec 13 13:29:36.043666 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:29:36.043680 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:29:36.043694 kernel: ima: No architecture policies found
Dec 13 13:29:36.043712 kernel: clk: Disabling unused clocks
Dec 13 13:29:36.043727 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:29:36.043741 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:29:36.043756 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:29:36.043771 kernel: Run /init as init process
Dec 13 13:29:36.043786 kernel: with arguments:
Dec 13 13:29:36.043799 kernel: /init
Dec 13 13:29:36.043812 kernel: with environment:
Dec 13 13:29:36.043823 kernel: HOME=/
Dec 13 13:29:36.043838 kernel: TERM=linux
Dec 13 13:29:36.043853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:29:36.043871 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:29:36.043889 systemd[1]: Detected virtualization microsoft.
Dec 13 13:29:36.043905 systemd[1]: Detected architecture x86-64.
Dec 13 13:29:36.043920 systemd[1]: Running in initrd.
Dec 13 13:29:36.043936 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:29:36.043953 systemd[1]: Hostname set to .
Dec 13 13:29:36.043973 systemd[1]: Initializing machine ID from random generator.
Dec 13 13:29:36.043989 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:29:36.044006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:29:36.044022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:29:36.044039 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:29:36.044055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:29:36.044071 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:29:36.044090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:29:36.044105 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:29:36.044120 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:29:36.044135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:29:36.044148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:29:36.044162 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:29:36.044175 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:29:36.044190 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:29:36.044201 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:29:36.044212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:29:36.044220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:29:36.044232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:29:36.044240 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:29:36.044249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:29:36.044260 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:29:36.044271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:29:36.044284 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:29:36.044294 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:29:36.044305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:29:36.044313 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:29:36.044322 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:29:36.044333 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:29:36.044341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:29:36.044350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:29:36.044376 systemd-journald[177]: Collecting audit messages is disabled.
Dec 13 13:29:36.044399 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:29:36.044408 systemd-journald[177]: Journal started
Dec 13 13:29:36.044429 systemd-journald[177]: Runtime Journal (/run/log/journal/685b7dc74f914ef4802cb37f235d1f30) is 8.0M, max 158.8M, 150.8M free.
Dec 13 13:29:36.055534 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:29:36.054165 systemd-modules-load[178]: Inserted module 'overlay'
Dec 13 13:29:36.056733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:29:36.060930 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:29:36.075692 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:29:36.087643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:29:36.088104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:29:36.091644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:29:36.107323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:29:36.121685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:29:36.126428 systemd-modules-load[178]: Inserted module 'br_netfilter'
Dec 13 13:29:36.128577 kernel: Bridge firewalling registered
Dec 13 13:29:36.128778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:29:36.134830 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:29:36.136128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:29:36.140647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:29:36.149751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:29:36.155661 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:29:36.167701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:29:36.173169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:29:36.184868 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:29:36.202193 dracut-cmdline[215]: dracut-dracut-053 Dec 13 13:29:36.205999 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:29:36.223251 systemd-resolved[209]: Positive Trust Anchors: Dec 13 13:29:36.223267 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:29:36.223327 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:29:36.229628 systemd-resolved[209]: Defaulting to hostname 'linux'. Dec 13 13:29:36.230606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:29:36.246682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:29:36.302537 kernel: SCSI subsystem initialized Dec 13 13:29:36.312534 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 13:29:36.323534 kernel: iscsi: registered transport (tcp) Dec 13 13:29:36.343578 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:29:36.343627 kernel: QLogic iSCSI HBA Driver Dec 13 13:29:36.377897 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:29:36.387680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:29:36.413634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:29:36.413703 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:29:36.416563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:29:36.455534 kernel: raid6: avx512x4 gen() 18963 MB/s Dec 13 13:29:36.474530 kernel: raid6: avx512x2 gen() 18763 MB/s Dec 13 13:29:36.492533 kernel: raid6: avx512x1 gen() 18626 MB/s Dec 13 13:29:36.511528 kernel: raid6: avx2x4 gen() 18697 MB/s Dec 13 13:29:36.530528 kernel: raid6: avx2x2 gen() 18679 MB/s Dec 13 13:29:36.550497 kernel: raid6: avx2x1 gen() 13930 MB/s Dec 13 13:29:36.550541 kernel: raid6: using algorithm avx512x4 gen() 18963 MB/s Dec 13 13:29:36.571110 kernel: raid6: .... xor() 7204 MB/s, rmw enabled Dec 13 13:29:36.571144 kernel: raid6: using avx512x2 recovery algorithm Dec 13 13:29:36.592538 kernel: xor: automatically using best checksumming function avx Dec 13 13:29:36.728546 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:29:36.737472 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:29:36.751655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:29:36.767082 systemd-udevd[398]: Using default interface naming scheme 'v255'. Dec 13 13:29:36.771372 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:29:36.783736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 13 13:29:36.799016 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 13:29:36.823867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:29:36.832649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:29:36.871591 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:29:36.887823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:29:36.911978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:29:36.918329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:29:36.924736 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:29:36.930274 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:29:36.940659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:29:36.958831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:29:36.962296 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:29:36.990501 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 13:29:36.983042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:29:37.010490 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 13:29:37.010526 kernel: AES CTR mode by8 optimization enabled Dec 13 13:29:36.983266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:29:37.022814 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:29:37.022841 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:29:37.001447 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:29:37.004704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 13:29:37.004886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:37.005068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:37.024271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:37.045046 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 13:29:37.044162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:29:37.044287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:37.054300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:37.072622 kernel: PTP clock support registered Dec 13 13:29:37.081115 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 13:29:37.081163 kernel: hv_vmbus: registering driver hv_utils Dec 13 13:29:37.083726 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 13:29:37.085596 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 13:29:37.087471 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 13:29:37.129698 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 13:29:37.125426 systemd-resolved[209]: Clock change detected. Flushing caches. Dec 13 13:29:37.140809 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 13:29:37.143758 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:29:37.152454 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 13:29:37.153090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 13:29:37.181569 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 13:29:37.181600 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 13:29:37.181618 kernel: scsi host0: storvsc_host_t Dec 13 13:29:37.181888 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 13:29:37.181939 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 13:29:37.182100 kernel: scsi host1: storvsc_host_t Dec 13 13:29:37.182417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:29:37.190054 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 13:29:37.208095 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 13:29:37.210356 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:29:37.210378 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 13:29:37.223903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:29:37.242935 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 13:29:37.263310 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: VF slot 1 added Dec 13 13:29:37.263508 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 13:29:37.263688 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 13:29:37.263907 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 13:29:37.264063 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 13:29:37.264215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:29:37.264235 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 13:29:37.273769 kernel: hv_vmbus: registering driver hv_pci Dec 13 13:29:37.318925 kernel: hv_pci 53643b0f-ed83-4701-b718-704064ee98cc: PCI VMBus probing: Using version 0x10004 Dec 13 13:29:37.386144 kernel: hv_pci 53643b0f-ed83-4701-b718-704064ee98cc: PCI host bridge to bus ed83:00 Dec 13 13:29:37.386274 kernel: pci_bus ed83:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 13:29:37.386383 kernel: pci_bus ed83:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 13:29:37.386473 kernel: pci ed83:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 13:29:37.386590 kernel: pci ed83:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:29:37.386690 kernel: pci ed83:00:02.0: enabling Extended Tags Dec 13 13:29:37.386828 kernel: pci ed83:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ed83:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 13:29:37.387321 kernel: pci_bus ed83:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 13:29:37.387476 kernel: pci ed83:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:29:37.544324 kernel: mlx5_core ed83:00:02.0: enabling device (0000 -> 0002) Dec 13 13:29:37.802100 kernel: mlx5_core ed83:00:02.0: firmware version: 14.30.5000
Dec 13 13:29:37.802308 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: VF registering: eth1 Dec 13 13:29:37.802475 kernel: mlx5_core ed83:00:02.0 eth1: joined to eth0 Dec 13 13:29:37.802659 kernel: mlx5_core ed83:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 13:29:37.730443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 13:29:37.809927 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (440) Dec 13 13:29:37.809967 kernel: mlx5_core ed83:00:02.0 enP60803s1: renamed from eth1 Dec 13 13:29:37.824834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 13:29:37.837511 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (454) Dec 13 13:29:37.848727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:29:37.860461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 13:29:37.863236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 13:29:37.876886 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:29:37.887775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:29:38.903782 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:29:38.903873 disk-uuid[599]: The operation has completed successfully. Dec 13 13:29:38.974953 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:29:38.975064 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:29:39.004878 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:29:39.012183 sh[685]: Success Dec 13 13:29:39.044258 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 13:29:39.241904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:29:39.254667 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:29:39.258552 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:29:39.281763 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:29:39.281804 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:39.286929 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:29:39.289421 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:29:39.291708 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:29:39.529440 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:29:39.534551 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:29:39.542899 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:29:39.549921 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:29:39.564738 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:39.564785 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:39.567675 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:29:39.583787 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:29:39.597758 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:39.597231 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Dec 13 13:29:39.605153 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:29:39.617330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:29:39.655624 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:29:39.666375 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:29:39.685638 systemd-networkd[869]: lo: Link UP Dec 13 13:29:39.685647 systemd-networkd[869]: lo: Gained carrier Dec 13 13:29:39.688051 systemd-networkd[869]: Enumeration completed Dec 13 13:29:39.688466 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:29:39.695544 systemd[1]: Reached target network.target - Network. Dec 13 13:29:39.695799 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:29:39.695804 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:29:39.761768 kernel: mlx5_core ed83:00:02.0 enP60803s1: Link up Dec 13 13:29:39.795085 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: Data path switched to VF: enP60803s1 Dec 13 13:29:39.794650 systemd-networkd[869]: enP60803s1: Link UP Dec 13 13:29:39.794806 systemd-networkd[869]: eth0: Link UP Dec 13 13:29:39.794968 systemd-networkd[869]: eth0: Gained carrier Dec 13 13:29:39.794980 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:29:39.811171 systemd-networkd[869]: enP60803s1: Gained carrier Dec 13 13:29:39.837797 systemd-networkd[869]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 13:29:40.373884 ignition[804]: Ignition 2.20.0 Dec 13 13:29:40.373897 ignition[804]: Stage: fetch-offline Dec 13 13:29:40.373944 ignition[804]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.373956 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.374076 ignition[804]: parsed url from cmdline: "" Dec 13 13:29:40.374082 ignition[804]: no config URL provided Dec 13 13:29:40.374088 ignition[804]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:29:40.374098 ignition[804]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:29:40.374105 ignition[804]: failed to fetch config: resource requires networking Dec 13 13:29:40.374418 ignition[804]: Ignition finished successfully Dec 13 13:29:40.391585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:29:40.401932 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:29:40.413588 ignition[877]: Ignition 2.20.0 Dec 13 13:29:40.413600 ignition[877]: Stage: fetch Dec 13 13:29:40.413808 ignition[877]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.413821 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.415348 ignition[877]: parsed url from cmdline: "" Dec 13 13:29:40.415354 ignition[877]: no config URL provided Dec 13 13:29:40.415361 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:29:40.415372 ignition[877]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:29:40.416649 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 13:29:40.503181 ignition[877]: GET result: OK Dec 13 13:29:40.503270 ignition[877]: config has been read from IMDS userdata Dec 13 13:29:40.503311 ignition[877]: parsing config with SHA512: f71caf7c23ea2f1ce71369a041be3f8e4f68f7c00ba62e50244733c16dfa6c8ebf291981fa39a0ed22b5c308e0e7865b6829c2136c71d017449ea12403a3f764 Dec 13 13:29:40.512669 unknown[877]: fetched base config from "system" Dec 13 13:29:40.512683 unknown[877]: fetched base config from "system" Dec 13 13:29:40.513195 ignition[877]: fetch: fetch complete Dec 13 13:29:40.512692 unknown[877]: fetched user config from "azure" Dec 13 13:29:40.513200 ignition[877]: fetch: fetch passed Dec 13 13:29:40.516376 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:29:40.513241 ignition[877]: Ignition finished successfully Dec 13 13:29:40.529403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 13:29:40.541724 ignition[883]: Ignition 2.20.0 Dec 13 13:29:40.541735 ignition[883]: Stage: kargs Dec 13 13:29:40.541979 ignition[883]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.541989 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.549693 ignition[883]: kargs: kargs passed Dec 13 13:29:40.549764 ignition[883]: Ignition finished successfully Dec 13 13:29:40.553146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:29:40.562898 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:29:40.574529 ignition[889]: Ignition 2.20.0 Dec 13 13:29:40.574538 ignition[889]: Stage: disks Dec 13 13:29:40.577073 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:29:40.574742 ignition[889]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.580364 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:29:40.574780 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.584809 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:29:40.575587 ignition[889]: disks: disks passed Dec 13 13:29:40.587511 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:29:40.575628 ignition[889]: Ignition finished successfully Dec 13 13:29:40.591739 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:29:40.594039 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:29:40.607442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:29:40.664115 systemd-fsck[897]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 13:29:40.670868 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:29:40.682893 systemd[1]: Mounting sysroot.mount - /sysroot... 
Dec 13 13:29:40.783765 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:29:40.784249 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:29:40.788693 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:29:40.823875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:29:40.829029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:29:40.835904 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 13:29:40.860924 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (908) Dec 13 13:29:40.860951 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:40.860966 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:40.860977 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:29:40.860990 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:29:40.856272 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:29:40.856307 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:29:40.881033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:29:40.885985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:29:40.896148 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 13:29:41.117077 systemd-networkd[869]: enP60803s1: Gained IPv6LL Dec 13 13:29:41.479079 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:29:41.509175 initrd-setup-root[940]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:29:41.515374 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:29:41.521072 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:29:41.542696 coreos-metadata[910]: Dec 13 13:29:41.542 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:29:41.548435 coreos-metadata[910]: Dec 13 13:29:41.548 INFO Fetch successful Dec 13 13:29:41.550573 coreos-metadata[910]: Dec 13 13:29:41.548 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:29:41.567441 coreos-metadata[910]: Dec 13 13:29:41.567 INFO Fetch successful Dec 13 13:29:41.569529 coreos-metadata[910]: Dec 13 13:29:41.568 INFO wrote hostname ci-4186.0.0-a-6a956dd616 to /sysroot/etc/hostname Dec 13 13:29:41.574655 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:29:41.629124 systemd-networkd[869]: eth0: Gained IPv6LL Dec 13 13:29:42.213974 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:29:42.223981 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:29:42.230924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:29:42.238764 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:42.238528 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 13:29:42.266383 ignition[1026]: INFO : Ignition 2.20.0 Dec 13 13:29:42.271915 ignition[1026]: INFO : Stage: mount Dec 13 13:29:42.271915 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:42.271915 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:42.271915 ignition[1026]: INFO : mount: mount passed Dec 13 13:29:42.271915 ignition[1026]: INFO : Ignition finished successfully Dec 13 13:29:42.271156 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:29:42.274342 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:29:42.290871 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:29:42.297637 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:29:42.321195 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1038) Dec 13 13:29:42.321241 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:42.324659 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:42.326915 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:29:42.331763 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:29:42.332961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:29:42.356175 ignition[1054]: INFO : Ignition 2.20.0 Dec 13 13:29:42.356175 ignition[1054]: INFO : Stage: files Dec 13 13:29:42.360171 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:42.360171 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:42.360171 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:29:42.382948 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:29:42.382948 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:29:42.460434 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:29:42.463966 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:29:42.463966 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:29:42.460961 unknown[1054]: wrote ssh authorized keys file for user: core Dec 13 13:29:42.474003 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:29:42.478206 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:29:42.537620 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:29:42.666920 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 13:29:43.195548 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 13:29:43.524460 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:43.524460 ignition[1054]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 13:29:43.538663 ignition[1054]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:29:43.543805 ignition[1054]: INFO : files: files passed Dec 13 13:29:43.543805 ignition[1054]: INFO : Ignition finished successfully Dec 13 13:29:43.560335 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:29:43.578853 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:29:43.584734 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:29:43.590770 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:29:43.590881 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:29:43.604899 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:29:43.604899 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:29:43.599256 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:29:43.618924 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:29:43.602862 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:29:43.614910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:29:43.635716 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:29:43.635844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:29:43.639068 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:29:43.639405 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:29:43.639962 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:29:43.642891 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:29:43.657457 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:29:43.669974 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:29:43.679031 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:29:43.683964 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Dec 13 13:29:43.688897 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:29:43.690863 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:29:43.690966 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:29:43.699803 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:29:43.704227 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:29:43.707989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:29:43.713075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:29:43.717955 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:29:43.720321 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:29:43.724905 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:29:43.729597 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:29:43.736663 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:29:43.738954 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:29:43.743093 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:29:43.743228 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:29:43.751247 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:29:43.755833 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:29:43.761092 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:29:43.763282 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:29:43.768739 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:29:43.768878 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 13 13:29:43.773628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:29:43.773784 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:29:43.777921 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:29:43.784940 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:29:43.789891 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 13:29:43.791830 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:29:43.801959 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:29:43.804263 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:29:43.804414 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:29:43.816524 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:29:43.818533 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:29:43.818731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:29:43.832478 ignition[1108]: INFO : Ignition 2.20.0 Dec 13 13:29:43.832478 ignition[1108]: INFO : Stage: umount Dec 13 13:29:43.832478 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:43.832478 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:43.823816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:29:43.839493 ignition[1108]: INFO : umount: umount passed Dec 13 13:29:43.839493 ignition[1108]: INFO : Ignition finished successfully Dec 13 13:29:43.824010 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:29:43.834368 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:29:43.834478 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Dec 13 13:29:43.836171 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:29:43.836408 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:29:43.836871 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:29:43.836963 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:29:43.837203 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:29:43.837300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:29:43.837624 systemd[1]: Stopped target network.target - Network. Dec 13 13:29:43.841370 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:29:43.841476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:29:43.841814 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:29:43.842187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:29:43.867164 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:29:43.869846 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:29:43.873724 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:29:43.877706 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:29:43.879573 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:29:43.907149 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:29:43.907198 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:29:43.911249 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:29:43.911296 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:29:43.915260 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:29:43.915310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Dec 13 13:29:43.922282 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:29:43.925952 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:29:43.930814 systemd-networkd[869]: eth0: DHCPv6 lease lost Dec 13 13:29:43.934739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:29:43.935532 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:29:43.935639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:29:43.946227 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:29:43.946518 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:29:43.951864 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:29:43.951947 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:29:43.957865 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:29:43.957921 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:29:43.969166 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:29:43.973124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:29:43.973192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:29:43.981655 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:29:43.981711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:29:43.987787 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:29:43.987840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:29:43.994425 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:29:43.994479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 13 13:29:44.001737 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:29:44.022469 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:29:44.022619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:29:44.027632 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:29:44.027714 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:29:44.032323 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:29:44.038607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:29:44.043240 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:29:44.043299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:29:44.046037 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:29:44.046077 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:29:44.064016 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: Data path switched from VF: enP60803s1 Dec 13 13:29:44.049893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:29:44.049954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:29:44.067859 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:29:44.071566 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:29:44.071626 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:29:44.076601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:29:44.076659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:44.079980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Dec 13 13:29:44.080089 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:29:44.087603 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:29:44.087702 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:29:44.389216 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:29:44.389382 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:29:44.396590 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:29:44.401305 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:29:44.401381 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:29:44.413872 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:29:44.424974 systemd[1]: Switching root. Dec 13 13:29:44.495775 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Dec 13 13:29:44.495868 systemd-journald[177]: Journal stopped Dec 13 13:29:36.036955 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024 Dec 13 13:29:36.036991 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:29:36.037005 kernel: BIOS-provided physical RAM map: Dec 13 13:29:36.037016 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 13:29:36.037027 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 13:29:36.037037 kernel: BIOS-e820: [mem 
0x0000000000100000-0x000000003ff40fff] usable Dec 13 13:29:36.037051 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 13:29:36.037062 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 13:29:36.037076 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 13:29:36.037087 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 13:29:36.037098 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 13:29:36.037108 kernel: printk: bootconsole [earlyser0] enabled Dec 13 13:29:36.037119 kernel: NX (Execute Disable) protection: active Dec 13 13:29:36.037131 kernel: APIC: Static calls initialized Dec 13 13:29:36.037147 kernel: efi: EFI v2.7 by Microsoft Dec 13 13:29:36.037160 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 Dec 13 13:29:36.037172 kernel: random: crng init done Dec 13 13:29:36.037185 kernel: secureboot: Secure boot disabled Dec 13 13:29:36.037197 kernel: SMBIOS 3.1.0 present. 
Dec 13 13:29:36.037209 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 13:29:36.037222 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 13:29:36.037234 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 13:29:36.037246 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Dec 13 13:29:36.037259 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 13:29:36.037273 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 13:29:36.037286 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 13:29:36.037298 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 13:29:36.037311 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 13:29:36.037324 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 13:29:36.037337 kernel: tsc: Detected 2593.903 MHz processor Dec 13 13:29:36.037350 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 13:29:36.037362 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 13:29:36.037375 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 13:29:36.037390 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 13:29:36.037403 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 13:29:36.037416 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 13:29:36.037428 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 13:29:36.037441 kernel: Using GB pages for direct mapping Dec 13 13:29:36.037454 kernel: ACPI: Early table checksum verification disabled Dec 13 13:29:36.037467 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 13:29:36.037484 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 
00000001) Dec 13 13:29:36.037500 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037533 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 13:29:36.037546 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 13:29:36.037560 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037574 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037587 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037604 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037616 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037630 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037644 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:29:36.037657 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 13:29:36.037671 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 13:29:36.037684 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 13:29:36.037698 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 13:29:36.037711 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 13:29:36.037727 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 13:29:36.037740 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Dec 13 13:29:36.037754 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 13:29:36.037767 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 13:29:36.037781 kernel: ACPI: 
Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 13:29:36.037795 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 13:29:36.037808 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 13:29:36.037821 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 13:29:36.037835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 13:29:36.037850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 13:29:36.037863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 13:29:36.037878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 13:29:36.037891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 13:29:36.037905 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 13:29:36.037918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 13:29:36.037932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 13:29:36.037945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 13:29:36.037961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 13:29:36.037974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 13:29:36.037988 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 13:29:36.038002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 13:29:36.038015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 13:29:36.038029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 13:29:36.038042 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 13:29:36.038056 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 
13:29:36.038070 kernel: Zone ranges: Dec 13 13:29:36.038085 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:29:36.038099 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 13:29:36.038112 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 13:29:36.038125 kernel: Movable zone start for each node Dec 13 13:29:36.038139 kernel: Early memory node ranges Dec 13 13:29:36.038152 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 13:29:36.038166 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 13:29:36.038179 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 13:29:36.038192 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 13:29:36.038208 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 13:29:36.038221 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:29:36.038235 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 13:29:36.038252 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 13:29:36.038265 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 13:29:36.038278 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 13:29:36.038291 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:29:36.038305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 13:29:36.038319 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:29:36.038335 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 13:29:36.038348 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 13:29:36.038362 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 13:29:36.038375 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 13:29:36.038389 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:29:36.038403 kernel: setup_percpu: NR_CPUS:512 
nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 13:29:36.038416 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 13:29:36.038430 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 13:29:36.038443 kernel: pcpu-alloc: [0] 0 1 Dec 13 13:29:36.038459 kernel: Hyper-V: PV spinlocks enabled Dec 13 13:29:36.038472 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 13:29:36.038488 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:29:36.038503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:29:36.038525 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 13:29:36.038536 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:29:36.038548 kernel: Fallback order for Node 0: 0 Dec 13 13:29:36.038560 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 13:29:36.038574 kernel: Policy zone: Normal Dec 13 13:29:36.038696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:29:36.038710 kernel: software IO TLB: area num 2. 
Dec 13 13:29:36.038730 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 312164K reserved, 0K cma-reserved) Dec 13 13:29:36.038741 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:29:36.038752 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:29:36.038760 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:29:36.040847 kernel: Dynamic Preempt: voluntary Dec 13 13:29:36.040868 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:29:36.040883 kernel: rcu: RCU event tracing is enabled. Dec 13 13:29:36.040897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:29:36.040917 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:29:36.040930 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:29:36.040944 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:29:36.040958 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 13:29:36.040972 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:29:36.040985 kernel: Using NULL legacy PIC Dec 13 13:29:36.041001 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 13:29:36.041015 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 13:29:36.041029 kernel: Console: colour dummy device 80x25 Dec 13 13:29:36.041042 kernel: printk: console [tty1] enabled Dec 13 13:29:36.041056 kernel: printk: console [ttyS0] enabled Dec 13 13:29:36.041070 kernel: printk: bootconsole [earlyser0] disabled Dec 13 13:29:36.041083 kernel: ACPI: Core revision 20230628 Dec 13 13:29:36.041097 kernel: Failed to register legacy timer interrupt Dec 13 13:29:36.041111 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:29:36.041126 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 13:29:36.041140 kernel: Hyper-V: Using IPI hypercalls Dec 13 13:29:36.041153 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 13 13:29:36.041167 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 13 13:29:36.041181 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 13 13:29:36.041195 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 13 13:29:36.041208 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 13 13:29:36.041222 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 13 13:29:36.041236 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593903) Dec 13 13:29:36.041252 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 13:29:36.041266 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 13:29:36.041279 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:29:36.041293 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 13:29:36.041306 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:29:36.041320 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 13:29:36.041334 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 13:29:36.041347 kernel: RETBleed: Vulnerable Dec 13 13:29:36.041360 kernel: Speculative Store Bypass: Vulnerable Dec 13 13:29:36.041374 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:29:36.041390 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:29:36.041403 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 13:29:36.041417 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 13:29:36.041430 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 13:29:36.041444 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 13:29:36.041457 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 13:29:36.041471 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 13:29:36.041484 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 13:29:36.041498 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 13:29:36.041511 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 13:29:36.041532 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 13:29:36.041548 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 13:29:36.041562 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 13:29:36.041575 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:29:36.041588 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:29:36.041602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:29:36.041616 kernel: landlock: Up and running. Dec 13 13:29:36.041629 kernel: SELinux: Initializing. 
Dec 13 13:29:36.041643 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:29:36.041656 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:29:36.041670 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 13:29:36.041683 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:29:36.041700 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:29:36.041714 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:29:36.041727 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 13:29:36.041741 kernel: signal: max sigframe size: 3632 Dec 13 13:29:36.041755 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:29:36.041769 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:29:36.041782 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 13:29:36.041796 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:29:36.041810 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:29:36.041826 kernel: .... node #0, CPUs: #1 Dec 13 13:29:36.041840 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 13:29:36.041855 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 13:29:36.041868 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:29:36.041882 kernel: smpboot: Max logical packages: 1 Dec 13 13:29:36.041895 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Dec 13 13:29:36.041909 kernel: devtmpfs: initialized Dec 13 13:29:36.041923 kernel: x86/mm: Memory block size: 128MB Dec 13 13:29:36.041936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 13:29:36.041953 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:29:36.041967 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:29:36.041981 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:29:36.041994 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:29:36.042008 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:29:36.042022 kernel: audit: type=2000 audit(1734096574.027:1): state=initialized audit_enabled=0 res=1 Dec 13 13:29:36.042035 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:29:36.042048 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:29:36.042062 kernel: cpuidle: using governor menu Dec 13 13:29:36.042078 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:29:36.042091 kernel: dca service started, version 1.12.1 Dec 13 13:29:36.042105 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Dec 13 13:29:36.042118 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 13:29:36.042132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:29:36.042146 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:29:36.042159 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:29:36.042173 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:29:36.042189 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:29:36.042202 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:29:36.042216 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:29:36.042229 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:29:36.042243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:29:36.042257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:29:36.042270 kernel: ACPI: Interpreter enabled Dec 13 13:29:36.042284 kernel: ACPI: PM: (supports S0 S5) Dec 13 13:29:36.042297 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:29:36.042311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:29:36.042327 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 13:29:36.042341 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 13:29:36.042354 kernel: iommu: Default domain type: Translated Dec 13 13:29:36.042368 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:29:36.042382 kernel: efivars: Registered efivars operations Dec 13 13:29:36.042395 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:29:36.042409 kernel: PCI: System does not support PCI Dec 13 13:29:36.042422 kernel: vgaarb: loaded Dec 13 13:29:36.042436 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 13:29:36.042452 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:29:36.042465 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:29:36.042479 kernel: pnp: PnP ACPI init Dec 13 13:29:36.042493 
kernel: pnp: PnP ACPI: found 3 devices Dec 13 13:29:36.042507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:29:36.042527 kernel: NET: Registered PF_INET protocol family Dec 13 13:29:36.042541 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:29:36.042555 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 13:29:36.042569 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:29:36.042586 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:29:36.042599 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 13:29:36.042613 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 13:29:36.042625 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:29:36.042640 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:29:36.042655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:29:36.042669 kernel: NET: Registered PF_XDP protocol family Dec 13 13:29:36.042683 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:29:36.042701 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 13:29:36.042715 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Dec 13 13:29:36.042730 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:29:36.042745 kernel: Initialise system trusted keyrings Dec 13 13:29:36.042758 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 13:29:36.042773 kernel: Key type asymmetric registered Dec 13 13:29:36.042788 kernel: Asymmetric key parser 'x509' registered Dec 13 13:29:36.042803 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:29:36.042817 kernel: io scheduler mq-deadline 
registered Dec 13 13:29:36.042833 kernel: io scheduler kyber registered Dec 13 13:29:36.042847 kernel: io scheduler bfq registered Dec 13 13:29:36.042861 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:29:36.042875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:29:36.042889 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:29:36.042903 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 13:29:36.042918 kernel: i8042: PNP: No PS/2 controller found. Dec 13 13:29:36.043094 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 13:29:36.043223 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T13:29:35 UTC (1734096575) Dec 13 13:29:36.043335 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 13:29:36.043353 kernel: intel_pstate: CPU model not supported Dec 13 13:29:36.043368 kernel: efifb: probing for efifb Dec 13 13:29:36.043382 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 13:29:36.043397 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 13:29:36.043411 kernel: efifb: scrolling: redraw Dec 13 13:29:36.043425 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 13:29:36.043438 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:29:36.043457 kernel: fb0: EFI VGA frame buffer device Dec 13 13:29:36.043470 kernel: pstore: Using crash dump compression: deflate Dec 13 13:29:36.043486 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 13:29:36.043500 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:29:36.043528 kernel: Segment Routing with IPv6 Dec 13 13:29:36.043541 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:29:36.043551 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:29:36.043559 kernel: Key type dns_resolver registered Dec 13 13:29:36.043569 kernel: IPI shorthand broadcast: enabled Dec 13 13:29:36.043586 kernel: 
sched_clock: Marking stable (762002900, 39881000)->(989777600, -187893700) Dec 13 13:29:36.043599 kernel: registered taskstats version 1 Dec 13 13:29:36.043613 kernel: Loading compiled-in X.509 certificates Dec 13 13:29:36.043626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:29:36.043639 kernel: Key type .fscrypt registered Dec 13 13:29:36.043652 kernel: Key type fscrypt-provisioning registered Dec 13 13:29:36.043666 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 13:29:36.043680 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:29:36.043694 kernel: ima: No architecture policies found Dec 13 13:29:36.043712 kernel: clk: Disabling unused clocks Dec 13 13:29:36.043727 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:29:36.043741 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:29:36.043756 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:29:36.043771 kernel: Run /init as init process Dec 13 13:29:36.043786 kernel: with arguments: Dec 13 13:29:36.043799 kernel: /init Dec 13 13:29:36.043812 kernel: with environment: Dec 13 13:29:36.043823 kernel: HOME=/ Dec 13 13:29:36.043838 kernel: TERM=linux Dec 13 13:29:36.043853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:29:36.043871 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:29:36.043889 systemd[1]: Detected virtualization microsoft. Dec 13 13:29:36.043905 systemd[1]: Detected architecture x86-64. Dec 13 13:29:36.043920 systemd[1]: Running in initrd. Dec 13 13:29:36.043936 systemd[1]: No hostname configured, using default hostname. 
Dec 13 13:29:36.043953 systemd[1]: Hostname set to . Dec 13 13:29:36.043973 systemd[1]: Initializing machine ID from random generator. Dec 13 13:29:36.043989 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:29:36.044006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:29:36.044022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:29:36.044039 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:29:36.044055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:29:36.044071 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:29:36.044090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:29:36.044105 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:29:36.044120 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:29:36.044135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:29:36.044148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:29:36.044162 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:29:36.044175 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:29:36.044190 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:29:36.044201 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:29:36.044212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:29:36.044220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 13:29:36.044232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:29:36.044240 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:29:36.044249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:29:36.044260 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:29:36.044271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:29:36.044284 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:29:36.044294 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:29:36.044305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:29:36.044313 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:29:36.044322 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:29:36.044333 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:29:36.044341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:29:36.044350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:36.044376 systemd-journald[177]: Collecting audit messages is disabled. Dec 13 13:29:36.044399 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:29:36.044408 systemd-journald[177]: Journal started Dec 13 13:29:36.044429 systemd-journald[177]: Runtime Journal (/run/log/journal/685b7dc74f914ef4802cb37f235d1f30) is 8.0M, max 158.8M, 150.8M free. Dec 13 13:29:36.055534 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:29:36.054165 systemd-modules-load[178]: Inserted module 'overlay' Dec 13 13:29:36.056733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:29:36.060930 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 13:29:36.075692 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:29:36.087643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:29:36.088104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:29:36.091644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:29:36.107323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:36.121685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:29:36.126428 systemd-modules-load[178]: Inserted module 'br_netfilter' Dec 13 13:29:36.128577 kernel: Bridge firewalling registered Dec 13 13:29:36.128778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:29:36.134830 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:29:36.136128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:29:36.140647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:29:36.149751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:29:36.155661 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:29:36.167701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:29:36.173169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:29:36.184868 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 13 13:29:36.202193 dracut-cmdline[215]: dracut-dracut-053 Dec 13 13:29:36.205999 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:29:36.223251 systemd-resolved[209]: Positive Trust Anchors: Dec 13 13:29:36.223267 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:29:36.223327 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:29:36.229628 systemd-resolved[209]: Defaulting to hostname 'linux'. Dec 13 13:29:36.230606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:29:36.246682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:29:36.302537 kernel: SCSI subsystem initialized Dec 13 13:29:36.312534 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 13:29:36.323534 kernel: iscsi: registered transport (tcp) Dec 13 13:29:36.343578 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:29:36.343627 kernel: QLogic iSCSI HBA Driver Dec 13 13:29:36.377897 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:29:36.387680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:29:36.413634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:29:36.413703 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:29:36.416563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:29:36.455534 kernel: raid6: avx512x4 gen() 18963 MB/s Dec 13 13:29:36.474530 kernel: raid6: avx512x2 gen() 18763 MB/s Dec 13 13:29:36.492533 kernel: raid6: avx512x1 gen() 18626 MB/s Dec 13 13:29:36.511528 kernel: raid6: avx2x4 gen() 18697 MB/s Dec 13 13:29:36.530528 kernel: raid6: avx2x2 gen() 18679 MB/s Dec 13 13:29:36.550497 kernel: raid6: avx2x1 gen() 13930 MB/s Dec 13 13:29:36.550541 kernel: raid6: using algorithm avx512x4 gen() 18963 MB/s Dec 13 13:29:36.571110 kernel: raid6: .... xor() 7204 MB/s, rmw enabled Dec 13 13:29:36.571144 kernel: raid6: using avx512x2 recovery algorithm Dec 13 13:29:36.592538 kernel: xor: automatically using best checksumming function avx Dec 13 13:29:36.728546 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:29:36.737472 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:29:36.751655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:29:36.767082 systemd-udevd[398]: Using default interface naming scheme 'v255'. Dec 13 13:29:36.771372 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:29:36.783736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 13 13:29:36.799016 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 13:29:36.823867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:29:36.832649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:29:36.871591 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:29:36.887823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:29:36.911978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:29:36.918329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:29:36.924736 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:29:36.930274 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:29:36.940659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:29:36.958831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:29:36.962296 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:29:36.990501 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 13:29:36.983042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:29:37.010490 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 13:29:37.010526 kernel: AES CTR mode by8 optimization enabled Dec 13 13:29:36.983266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:29:37.022814 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:29:37.022841 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:29:37.001447 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:29:37.004704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 13:29:37.004886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:37.005068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:37.024271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:37.045046 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 13:29:37.044162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:29:37.044287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:37.054300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:37.072622 kernel: PTP clock support registered Dec 13 13:29:37.081115 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 13:29:37.081163 kernel: hv_vmbus: registering driver hv_utils Dec 13 13:29:37.083726 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 13:29:37.085596 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 13:29:37.087471 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 13:29:37.129698 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 13:29:37.125426 systemd-resolved[209]: Clock change detected. Flushing caches. Dec 13 13:29:37.140809 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 13:29:37.143758 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:29:37.152454 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 13:29:37.153090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 13:29:37.181569 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 13:29:37.181600 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 13:29:37.181618 kernel: scsi host0: storvsc_host_t Dec 13 13:29:37.181888 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 13:29:37.181939 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 13:29:37.182100 kernel: scsi host1: storvsc_host_t Dec 13 13:29:37.182417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:29:37.190054 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 13:29:37.208095 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 13:29:37.210356 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:29:37.210378 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 13:29:37.223903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:29:37.242935 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 13:29:37.263310 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: VF slot 1 added Dec 13 13:29:37.263508 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 13:29:37.263688 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 13:29:37.263907 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 13:29:37.264063 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 13:29:37.264215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:29:37.264235 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 13:29:37.273769 kernel: hv_vmbus: registering driver hv_pci Dec 13 13:29:37.318925 kernel: hv_pci 53643b0f-ed83-4701-b718-704064ee98cc: PCI VMBus probing: Using version 0x10004 Dec 13 13:29:37.386144 kernel: hv_pci 53643b0f-ed83-4701-b718-704064ee98cc: PCI host bridge to bus ed83:00 Dec 13 13:29:37.386274 kernel: pci_bus ed83:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 13:29:37.386383 kernel: pci_bus ed83:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 13:29:37.386473 kernel: pci ed83:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 13:29:37.386590 kernel: pci ed83:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:29:37.386690 kernel: pci ed83:00:02.0: enabling Extended Tags Dec 13 13:29:37.386828 kernel: pci ed83:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ed83:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 13:29:37.387321 kernel: pci_bus ed83:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 13:29:37.387476 kernel: pci ed83:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:29:37.544324 kernel: mlx5_core ed83:00:02.0: enabling device (0000 -> 0002) Dec 13 13:29:37.802100 kernel: mlx5_core ed83:00:02.0: firmware version: 14.30.5000 Dec 13 13:29:37.802308 
kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: VF registering: eth1 Dec 13 13:29:37.802475 kernel: mlx5_core ed83:00:02.0 eth1: joined to eth0 Dec 13 13:29:37.802659 kernel: mlx5_core ed83:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 13:29:37.730443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 13:29:37.809927 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (440) Dec 13 13:29:37.809967 kernel: mlx5_core ed83:00:02.0 enP60803s1: renamed from eth1 Dec 13 13:29:37.824834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 13:29:37.837511 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (454) Dec 13 13:29:37.848727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:29:37.860461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 13:29:37.863236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 13:29:37.876886 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:29:37.887775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:29:38.903782 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:29:38.903873 disk-uuid[599]: The operation has completed successfully. Dec 13 13:29:38.974953 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:29:38.975064 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:29:39.004878 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Dec 13 13:29:39.012183 sh[685]: Success Dec 13 13:29:39.044258 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 13:29:39.241904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:29:39.254667 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:29:39.258552 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:29:39.281763 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:29:39.281804 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:39.286929 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:29:39.289421 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:29:39.291708 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:29:39.529440 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:29:39.534551 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:29:39.542899 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:29:39.549921 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:29:39.564738 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:39.564785 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:39.567675 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:29:39.583787 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:29:39.597758 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:39.597231 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Dec 13 13:29:39.605153 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:29:39.617330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:29:39.655624 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:29:39.666375 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:29:39.685638 systemd-networkd[869]: lo: Link UP Dec 13 13:29:39.685647 systemd-networkd[869]: lo: Gained carrier Dec 13 13:29:39.688051 systemd-networkd[869]: Enumeration completed Dec 13 13:29:39.688466 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:29:39.695544 systemd[1]: Reached target network.target - Network. Dec 13 13:29:39.695799 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:29:39.695804 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:29:39.761768 kernel: mlx5_core ed83:00:02.0 enP60803s1: Link up Dec 13 13:29:39.795085 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: Data path switched to VF: enP60803s1 Dec 13 13:29:39.794650 systemd-networkd[869]: enP60803s1: Link UP Dec 13 13:29:39.794806 systemd-networkd[869]: eth0: Link UP Dec 13 13:29:39.794968 systemd-networkd[869]: eth0: Gained carrier Dec 13 13:29:39.794980 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:29:39.811171 systemd-networkd[869]: enP60803s1: Gained carrier Dec 13 13:29:39.837797 systemd-networkd[869]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 13:29:40.373884 ignition[804]: Ignition 2.20.0 Dec 13 13:29:40.373897 ignition[804]: Stage: fetch-offline Dec 13 13:29:40.373944 ignition[804]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.373956 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.374076 ignition[804]: parsed url from cmdline: "" Dec 13 13:29:40.374082 ignition[804]: no config URL provided Dec 13 13:29:40.374088 ignition[804]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:29:40.374098 ignition[804]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:29:40.374105 ignition[804]: failed to fetch config: resource requires networking Dec 13 13:29:40.374418 ignition[804]: Ignition finished successfully Dec 13 13:29:40.391585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:29:40.401932 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:29:40.413588 ignition[877]: Ignition 2.20.0 Dec 13 13:29:40.413600 ignition[877]: Stage: fetch Dec 13 13:29:40.413808 ignition[877]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.413821 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.415348 ignition[877]: parsed url from cmdline: "" Dec 13 13:29:40.415354 ignition[877]: no config URL provided Dec 13 13:29:40.415361 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:29:40.415372 ignition[877]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:29:40.416649 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 13:29:40.503181 ignition[877]: GET result: OK Dec 13 13:29:40.503270 ignition[877]: config has been read from IMDS userdata Dec 13 13:29:40.503311 ignition[877]: parsing config with SHA512: f71caf7c23ea2f1ce71369a041be3f8e4f68f7c00ba62e50244733c16dfa6c8ebf291981fa39a0ed22b5c308e0e7865b6829c2136c71d017449ea12403a3f764 Dec 13 13:29:40.512669 unknown[877]: fetched base config from "system" Dec 13 13:29:40.512683 unknown[877]: fetched base config from "system" Dec 13 13:29:40.513195 ignition[877]: fetch: fetch complete Dec 13 13:29:40.512692 unknown[877]: fetched user config from "azure" Dec 13 13:29:40.513200 ignition[877]: fetch: fetch passed Dec 13 13:29:40.516376 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:29:40.513241 ignition[877]: Ignition finished successfully Dec 13 13:29:40.529403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 13:29:40.541724 ignition[883]: Ignition 2.20.0 Dec 13 13:29:40.541735 ignition[883]: Stage: kargs Dec 13 13:29:40.541979 ignition[883]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.541989 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.549693 ignition[883]: kargs: kargs passed Dec 13 13:29:40.549764 ignition[883]: Ignition finished successfully Dec 13 13:29:40.553146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:29:40.562898 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:29:40.574529 ignition[889]: Ignition 2.20.0 Dec 13 13:29:40.574538 ignition[889]: Stage: disks Dec 13 13:29:40.577073 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:29:40.574742 ignition[889]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:40.580364 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:29:40.574780 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:29:40.584809 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:29:40.575587 ignition[889]: disks: disks passed Dec 13 13:29:40.587511 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:29:40.575628 ignition[889]: Ignition finished successfully Dec 13 13:29:40.591739 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:29:40.594039 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:29:40.607442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:29:40.664115 systemd-fsck[897]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 13:29:40.670868 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:29:40.682893 systemd[1]: Mounting sysroot.mount - /sysroot... 
Dec 13 13:29:40.783765 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:29:40.784249 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:29:40.788693 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:29:40.823875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:29:40.829029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:29:40.835904 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 13:29:40.860924 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (908)
Dec 13 13:29:40.860951 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:29:40.860966 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:29:40.860977 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:29:40.860990 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 13:29:40.856272 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:29:40.856307 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:29:40.881033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:29:40.885985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:29:40.896148 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:29:41.117077 systemd-networkd[869]: enP60803s1: Gained IPv6LL
Dec 13 13:29:41.479079 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:29:41.509175 initrd-setup-root[940]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:29:41.515374 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:29:41.521072 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:29:41.542696 coreos-metadata[910]: Dec 13 13:29:41.542 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 13:29:41.548435 coreos-metadata[910]: Dec 13 13:29:41.548 INFO Fetch successful
Dec 13 13:29:41.550573 coreos-metadata[910]: Dec 13 13:29:41.548 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 13:29:41.567441 coreos-metadata[910]: Dec 13 13:29:41.567 INFO Fetch successful
Dec 13 13:29:41.569529 coreos-metadata[910]: Dec 13 13:29:41.568 INFO wrote hostname ci-4186.0.0-a-6a956dd616 to /sysroot/etc/hostname
Dec 13 13:29:41.574655 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 13:29:41.629124 systemd-networkd[869]: eth0: Gained IPv6LL
Dec 13 13:29:42.213974 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:29:42.223981 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:29:42.230924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:29:42.238764 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:29:42.238528 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:29:42.266383 ignition[1026]: INFO : Ignition 2.20.0
Dec 13 13:29:42.271915 ignition[1026]: INFO : Stage: mount
Dec 13 13:29:42.271915 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:29:42.271915 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 13:29:42.271915 ignition[1026]: INFO : mount: mount passed
Dec 13 13:29:42.271915 ignition[1026]: INFO : Ignition finished successfully
Dec 13 13:29:42.271156 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:29:42.274342 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:29:42.290871 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:29:42.297637 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:29:42.321195 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1038)
Dec 13 13:29:42.321241 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:29:42.324659 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:29:42.326915 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:29:42.331763 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 13:29:42.332961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:29:42.356175 ignition[1054]: INFO : Ignition 2.20.0
Dec 13 13:29:42.356175 ignition[1054]: INFO : Stage: files
Dec 13 13:29:42.360171 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:29:42.360171 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 13:29:42.360171 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:29:42.382948 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:29:42.382948 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:29:42.460434 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:29:42.463966 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:29:42.463966 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:29:42.460961 unknown[1054]: wrote ssh authorized keys file for user: core
Dec 13 13:29:42.474003 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:29:42.478206 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 13:29:42.537620 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:29:42.666920 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:29:42.672024 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 13:29:43.195548 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 13:29:43.524460 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:29:43.524460 ignition[1054]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 13:29:43.538663 ignition[1054]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:29:43.543805 ignition[1054]: INFO : files: files passed
Dec 13 13:29:43.543805 ignition[1054]: INFO : Ignition finished successfully
Dec 13 13:29:43.560335 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:29:43.578853 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:29:43.584734 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:29:43.590770 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:29:43.590881 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:29:43.604899 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:29:43.604899 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:29:43.599256 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:29:43.618924 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:29:43.602862 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:29:43.614910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:29:43.635716 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:29:43.635844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:29:43.639068 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:29:43.639405 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:29:43.639962 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:29:43.642891 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:29:43.657457 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:29:43.669974 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:29:43.679031 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:29:43.683964 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:29:43.688897 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:29:43.690863 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:29:43.690966 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:29:43.699803 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:29:43.704227 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:29:43.707989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:29:43.713075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:29:43.717955 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:29:43.720321 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:29:43.724905 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:29:43.729597 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:29:43.736663 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:29:43.738954 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:29:43.743093 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:29:43.743228 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:29:43.751247 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:29:43.755833 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:29:43.761092 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:29:43.763282 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:29:43.768739 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:29:43.768878 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:29:43.773628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:29:43.773784 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:29:43.777921 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:29:43.784940 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:29:43.789891 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 13:29:43.791830 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 13:29:43.801959 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:29:43.804263 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:29:43.804414 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:29:43.816524 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:29:43.818533 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:29:43.818731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:29:43.832478 ignition[1108]: INFO : Ignition 2.20.0
Dec 13 13:29:43.832478 ignition[1108]: INFO : Stage: umount
Dec 13 13:29:43.832478 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:29:43.832478 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 13:29:43.823816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:29:43.839493 ignition[1108]: INFO : umount: umount passed
Dec 13 13:29:43.839493 ignition[1108]: INFO : Ignition finished successfully
Dec 13 13:29:43.824010 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:29:43.834368 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:29:43.834478 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:29:43.836171 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:29:43.836408 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:29:43.836871 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:29:43.836963 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:29:43.837203 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:29:43.837300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:29:43.837624 systemd[1]: Stopped target network.target - Network.
Dec 13 13:29:43.841370 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:29:43.841476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:29:43.841814 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:29:43.842187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:29:43.867164 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:29:43.869846 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:29:43.873724 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:29:43.877706 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:29:43.879573 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:29:43.907149 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:29:43.907198 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:29:43.911249 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:29:43.911296 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:29:43.915260 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:29:43.915310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:29:43.922282 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:29:43.925952 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:29:43.930814 systemd-networkd[869]: eth0: DHCPv6 lease lost
Dec 13 13:29:43.934739 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:29:43.935532 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:29:43.935639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:29:43.946227 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:29:43.946518 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:29:43.951864 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:29:43.951947 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:29:43.957865 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:29:43.957921 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:29:43.969166 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:29:43.973124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:29:43.973192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:29:43.981655 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:29:43.981711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:29:43.987787 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:29:43.987840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:29:43.994425 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:29:43.994479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:29:44.001737 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:29:44.022469 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:29:44.022619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:29:44.027632 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:29:44.027714 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:29:44.032323 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:29:44.038607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:29:44.043240 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:29:44.043299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:29:44.046037 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:29:44.046077 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:29:44.064016 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: Data path switched from VF: enP60803s1
Dec 13 13:29:44.049893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:29:44.049954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:29:44.067859 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:29:44.071566 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:29:44.071626 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:29:44.076601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:29:44.076659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:29:44.079980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:29:44.080089 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:29:44.087603 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:29:44.087702 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:29:44.389216 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:29:44.389382 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:29:44.396590 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:29:44.401305 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:29:44.401381 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:29:44.413872 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:29:44.424974 systemd[1]: Switching root.
Dec 13 13:29:44.495775 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:29:44.495868 systemd-journald[177]: Journal stopped
Dec 13 13:29:49.161766 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:29:49.161829 kernel: SELinux: policy capability open_perms=1
Dec 13 13:29:49.161847 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:29:49.161861 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:29:49.161876 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:29:49.161890 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:29:49.161906 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:29:49.161921 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:29:49.161937 kernel: audit: type=1403 audit(1734096585.855:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:29:49.161954 systemd[1]: Successfully loaded SELinux policy in 169.891ms.
Dec 13 13:29:49.161972 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.844ms.
Dec 13 13:29:49.161989 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:29:49.162005 systemd[1]: Detected virtualization microsoft.
Dec 13 13:29:49.162020 systemd[1]: Detected architecture x86-64.
Dec 13 13:29:49.162040 systemd[1]: Detected first boot.
Dec 13 13:29:49.162057 systemd[1]: Hostname set to .
Dec 13 13:29:49.162073 systemd[1]: Initializing machine ID from random generator.
Dec 13 13:29:49.162089 zram_generator::config[1151]: No configuration found.
Dec 13 13:29:49.162106 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:29:49.162124 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:29:49.162140 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:29:49.162156 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:29:49.162173 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:29:49.162189 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:29:49.162206 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:29:49.162223 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:29:49.162242 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:29:49.162259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:29:49.162276 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:29:49.162292 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:29:49.162308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:29:49.162325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:29:49.162341 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:29:49.162357 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:29:49.162376 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:29:49.162393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:29:49.162409 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:29:49.162426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:29:49.162442 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:29:49.162459 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:29:49.162480 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:29:49.162497 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:29:49.162514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:29:49.162533 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:29:49.162550 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:29:49.162567 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:29:49.162585 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:29:49.162602 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:29:49.162619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:29:49.162636 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:29:49.162656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:29:49.162674 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:29:49.162691 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:29:49.162708 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:29:49.162725 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:29:49.162753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:49.162771 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:29:49.162789 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:29:49.162806 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:29:49.162824 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:29:49.162842 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:29:49.162859 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:29:49.162876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:29:49.162897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:29:49.162914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:29:49.162932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:29:49.162949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:29:49.162969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:29:49.162986 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:29:49.163003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:29:49.163021 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:29:49.163041 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:29:49.163059 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:29:49.163076 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:29:49.163093 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:29:49.163110 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:29:49.163127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:29:49.163144 kernel: loop: module loaded
Dec 13 13:29:49.163160 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:29:49.163177 kernel: fuse: init (API version 7.39)
Dec 13 13:29:49.163207 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:29:49.163224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:29:49.163240 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:29:49.163257 systemd[1]: Stopped verity-setup.service.
Dec 13 13:29:49.163274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:49.163316 systemd-journald[1250]: Collecting audit messages is disabled.
Dec 13 13:29:49.163354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:29:49.163371 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:29:49.163389 systemd-journald[1250]: Journal started
Dec 13 13:29:49.163423 systemd-journald[1250]: Runtime Journal (/run/log/journal/49c523d375e14860868ecbc9612b558c) is 8.0M, max 158.8M, 150.8M free.
Dec 13 13:29:48.547113 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:29:48.644155 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 13:29:48.644548 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:29:49.169777 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:29:49.176141 kernel: ACPI: bus type drm_connector registered
Dec 13 13:29:49.175433 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:29:49.178198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:29:49.180867 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:29:49.183802 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:29:49.186191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:29:49.188958 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:29:49.191867 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:29:49.191996 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:29:49.194933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:29:49.195060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:29:49.198010 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:29:49.198135 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:29:49.200892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:29:49.201043 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:29:49.204631 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:29:49.204838 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:29:49.207731 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:29:49.208032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:29:49.211244 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:29:49.214324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:29:49.217737 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:29:49.235437 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:29:49.245819 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:29:49.256834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:29:49.262093 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:29:49.262139 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:29:49.266226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:29:49.274903 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:29:49.283895 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:29:49.286578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:29:49.287854 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:29:49.293908 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:29:49.296923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:29:49.300591 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:29:49.303233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:29:49.309319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:29:49.318917 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:29:49.321086 systemd-journald[1250]: Time spent on flushing to /var/log/journal/49c523d375e14860868ecbc9612b558c is 37.109ms for 953 entries.
Dec 13 13:29:49.321086 systemd-journald[1250]: System Journal (/var/log/journal/49c523d375e14860868ecbc9612b558c) is 8.0M, max 2.6G, 2.6G free.
Dec 13 13:29:49.392072 systemd-journald[1250]: Received client request to flush runtime journal.
Dec 13 13:29:49.325138 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:29:49.332830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:29:49.336126 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:29:49.341300 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:29:49.344414 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:29:49.358985 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:29:49.382174 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:29:49.385134 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:29:49.395871 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:29:49.399044 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:29:49.406684 udevadm[1295]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 13:29:49.412771 kernel: loop0: detected capacity change from 0 to 138184
Dec 13 13:29:49.413823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:29:49.438370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:29:49.439657 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:29:49.561475 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:29:49.571062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:29:49.625440 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Dec 13 13:29:49.625462 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Dec 13 13:29:49.629205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:29:49.772394 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:29:49.812778 kernel: loop1: detected capacity change from 0 to 141000
Dec 13 13:29:50.208771 kernel: loop2: detected capacity change from 0 to 210664
Dec 13 13:29:50.247772 kernel: loop3: detected capacity change from 0 to 28304
Dec 13 13:29:50.426279 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:29:50.433090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:29:50.464277 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Dec 13 13:29:50.632778 kernel: loop4: detected capacity change from 0 to 138184
Dec 13 13:29:50.645954 kernel: loop5: detected capacity change from 0 to 141000
Dec 13 13:29:50.658202 kernel: loop6: detected capacity change from 0 to 210664
Dec 13 13:29:50.665780 kernel: loop7: detected capacity change from 0 to 28304
Dec 13 13:29:50.669940 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 13 13:29:50.670400 (sd-merge)[1315]: Merged extensions into '/usr'.
Dec 13 13:29:50.674301 systemd[1]: Reloading requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:29:50.674316 systemd[1]: Reloading...
Dec 13 13:29:50.723764 zram_generator::config[1337]: No configuration found.
Dec 13 13:29:50.943850 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366)
Dec 13 13:29:50.986943 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:29:50.999490 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1366)
Dec 13 13:29:51.003041 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 13:29:51.009457 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 13:29:51.009528 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 13:29:51.015613 kernel: Console: switching to colour dummy device 80x25
Dec 13 13:29:51.019774 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 13:29:51.019829 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 13:29:51.023797 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 13:29:51.176730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:29:51.224831 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1372)
Dec 13 13:29:51.381090 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:29:51.381666 systemd[1]: Reloading finished in 706 ms.
Dec 13 13:29:51.472618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:29:51.476636 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:29:51.511065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 13:29:51.531762 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 13 13:29:51.546525 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:29:51.552130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:29:51.560040 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:29:51.567838 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:29:51.577002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:29:51.591131 systemd[1]: Reloading requested from client PID 1497 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:29:51.591145 systemd[1]: Reloading...
Dec 13 13:29:51.615615 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:29:51.616643 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:29:51.618946 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:29:51.619341 systemd-tmpfiles[1500]: ACLs are not supported, ignoring.
Dec 13 13:29:51.619411 systemd-tmpfiles[1500]: ACLs are not supported, ignoring.
Dec 13 13:29:51.649492 systemd-tmpfiles[1500]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:29:51.649509 systemd-tmpfiles[1500]: Skipping /boot
Dec 13 13:29:51.679869 systemd-tmpfiles[1500]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:29:51.679888 systemd-tmpfiles[1500]: Skipping /boot
Dec 13 13:29:51.691768 zram_generator::config[1534]: No configuration found.
Dec 13 13:29:51.825280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:29:51.921494 systemd[1]: Reloading finished in 329 ms.
Dec 13 13:29:51.938456 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:29:51.947269 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:29:51.951026 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:29:51.954659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:29:51.966714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:51.969992 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:29:51.974873 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:29:51.977659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:29:51.982838 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:29:51.992291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:29:51.995878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:29:52.001843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:29:52.004391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:29:52.012045 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:29:52.024020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:29:52.033437 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:29:52.040886 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:29:52.043501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:52.049543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:29:52.049702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:29:52.060077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:29:52.060277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:29:52.063994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:29:52.064169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:29:52.070808 lvm[1601]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:29:52.078960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:52.079290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:29:52.089996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:29:52.101850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:29:52.113009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:29:52.116797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:29:52.116982 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:52.118137 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:29:52.127402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:29:52.127651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:29:52.130916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:29:52.131046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:29:52.134181 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:29:52.134342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:29:52.154251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:52.154647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:29:52.159115 augenrules[1639]: No rules
Dec 13 13:29:52.161086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:29:52.164951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:29:52.170840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:29:52.176881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:29:52.180823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:29:52.181074 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:29:52.186091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:29:52.193048 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:29:52.196354 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:29:52.196596 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:29:52.199587 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:29:52.211011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:29:52.214996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:29:52.215801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:29:52.222952 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:29:52.223140 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:29:52.227544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:29:52.227724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:29:52.231066 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:29:52.231244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:29:52.236022 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:29:52.249674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:29:52.258909 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:29:52.261521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:29:52.261602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:29:52.264657 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:29:52.313505 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:29:52.337978 systemd-resolved[1611]: Positive Trust Anchors:
Dec 13 13:29:52.338003 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:29:52.338049 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:29:52.339051 systemd-networkd[1499]: lo: Link UP
Dec 13 13:29:52.339348 systemd-networkd[1499]: lo: Gained carrier
Dec 13 13:29:52.342943 systemd-networkd[1499]: Enumeration completed
Dec 13 13:29:52.343087 systemd-resolved[1611]: Using system hostname 'ci-4186.0.0-a-6a956dd616'.
Dec 13 13:29:52.343388 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:29:52.343393 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:29:52.343987 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:29:52.346955 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:29:52.349734 systemd[1]: Reached target network.target - Network.
Dec 13 13:29:52.351783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:29:52.360920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:29:52.396765 kernel: mlx5_core ed83:00:02.0 enP60803s1: Link up
Dec 13 13:29:52.416776 kernel: hv_netvsc 000d3ab6-bc1e-000d-3ab6-bc1e000d3ab6 eth0: Data path switched to VF: enP60803s1
Dec 13 13:29:52.418048 systemd-networkd[1499]: enP60803s1: Link UP
Dec 13 13:29:52.418179 systemd-networkd[1499]: eth0: Link UP
Dec 13 13:29:52.418183 systemd-networkd[1499]: eth0: Gained carrier
Dec 13 13:29:52.418201 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:29:52.422061 systemd-networkd[1499]: enP60803s1: Gained carrier
Dec 13 13:29:52.440904 systemd-networkd[1499]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 13:29:52.506567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:29:52.509925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:29:53.789344 systemd-networkd[1499]: eth0: Gained IPv6LL
Dec 13 13:29:53.790014 systemd-networkd[1499]: enP60803s1: Gained IPv6LL
Dec 13 13:29:53.792963 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:29:53.796688 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:29:54.507810 ldconfig[1282]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:29:54.518552 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:29:54.526921 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:29:54.544326 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:29:54.547560 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:29:54.549969 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:29:54.552573 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:29:54.555472 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:29:54.557873 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:29:54.560460 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:29:54.563097 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:29:54.563139 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:29:54.565253 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:29:54.568022 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:29:54.571503 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:29:54.579404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:29:54.582381 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:29:54.584702 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:29:54.586837 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:29:54.589048 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:29:54.589085 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:29:54.591562 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 13 13:29:54.595874 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:29:54.606875 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 13:29:54.613678 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:29:54.619102 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:29:54.625757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:29:54.631103 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:29:54.631155 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Dec 13 13:29:54.634920 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 13 13:29:54.640245 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 13 13:29:54.647887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:29:54.651814 jq[1679]: false
Dec 13 13:29:54.653047 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:29:54.657918 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:29:54.668895 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:29:54.672295 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:29:54.681110 (chronyd)[1672]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Dec 13 13:29:54.683948 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:29:54.688292 chronyd[1690]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Dec 13 13:29:54.690016 KVP[1681]: KVP starting; pid is:1681
Dec 13 13:29:54.691936 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:29:54.695459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:29:54.696978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:29:54.705207 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:29:54.712885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:29:54.719301 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:29:54.721339 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:29:54.724321 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:29:54.724534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:29:54.733795 kernel: hv_utils: KVP IC version 4.0
Dec 13 13:29:54.733871 KVP[1681]: KVP LIC Version: 3.1
Dec 13 13:29:54.736794 chronyd[1690]: Timezone right/UTC failed leap second check, ignoring
Dec 13 13:29:54.739642 systemd[1]: Started chronyd.service - NTP client/server.
Dec 13 13:29:54.737095 chronyd[1690]: Loaded seccomp filter (level 2)
Dec 13 13:29:54.745632 extend-filesystems[1680]: Found loop4
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found loop5
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found loop6
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found loop7
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda1
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda2
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda3
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found usr
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda4
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda6
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda7
Dec 13 13:29:54.751301 extend-filesystems[1680]: Found sda9
Dec 13 13:29:54.751301 extend-filesystems[1680]: Checking size of /dev/sda9
Dec 13 13:29:54.795941 update_engine[1692]: I20241213 13:29:54.773725 1692 main.cc:92] Flatcar Update Engine starting
Dec 13 13:29:54.802280 jq[1694]: true
Dec 13 13:29:54.803602 (ntainerd)[1709]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:29:54.808282 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:29:54.808496 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:29:54.832982 extend-filesystems[1680]: Old size kept for /dev/sda9
Dec 13 13:29:54.837042 extend-filesystems[1680]: Found sr0
Dec 13 13:29:54.836490 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:29:54.836687 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:29:54.859205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:29:54.868525 tar[1699]: linux-amd64/helm
Dec 13 13:29:54.873102 dbus-daemon[1676]: [system] SELinux support is enabled
Dec 13 13:29:54.875487 jq[1724]: true
Dec 13 13:29:54.875559 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:29:54.885646 update_engine[1692]: I20241213 13:29:54.885414 1692 update_check_scheduler.cc:74] Next update check in 11m30s
Dec 13 13:29:54.890517 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:29:54.890555 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:29:54.895247 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:29:54.895271 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:29:54.902852 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:29:54.913032 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:29:54.933506 systemd-logind[1691]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:29:54.934584 systemd-logind[1691]: New seat seat0.
Dec 13 13:29:54.936266 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:29:55.030919 coreos-metadata[1674]: Dec 13 13:29:55.030 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 13:29:55.043778 coreos-metadata[1674]: Dec 13 13:29:55.043 INFO Fetch successful
Dec 13 13:29:55.046831 coreos-metadata[1674]: Dec 13 13:29:55.044 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 13 13:29:55.053347 bash[1757]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:29:55.050299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:29:55.054191 coreos-metadata[1674]: Dec 13 13:29:55.053 INFO Fetch successful
Dec 13 13:29:55.055914 coreos-metadata[1674]: Dec 13 13:29:55.055 INFO Fetching http://168.63.129.16/machine/2c7d2836-9bf6-48cf-a2f8-43d305fc6ab3/6e9c14c3%2D8122%2D4272%2Dae60%2D4e9b28680e9e.%5Fci%2D4186.0.0%2Da%2D6a956dd616?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 13 13:29:55.058096 coreos-metadata[1674]: Dec 13 13:29:55.056 INFO Fetch successful
Dec 13 13:29:55.057585 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:29:55.063608 coreos-metadata[1674]: Dec 13 13:29:55.062 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 13 13:29:55.087809 coreos-metadata[1674]: Dec 13 13:29:55.083 INFO Fetch successful
Dec 13 13:29:55.090838 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1730)
Dec 13 13:29:55.113115 sshd_keygen[1718]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:29:55.155679 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 13:29:55.162110 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:29:55.191374 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:29:55.204016 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:29:55.209985 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 13 13:29:55.225526 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:29:55.225957 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:29:55.261633 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:29:55.303632 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 13 13:29:55.317458 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:29:55.321075 locksmithd[1735]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:29:55.342275 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:29:55.361282 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 13:29:55.364956 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:29:55.775058 tar[1699]: linux-amd64/LICENSE
Dec 13 13:29:55.775200 tar[1699]: linux-amd64/README.md
Dec 13 13:29:55.785248 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:29:56.059111 containerd[1709]: time="2024-12-13T13:29:56.058953200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:29:56.100049 containerd[1709]: time="2024-12-13T13:29:56.098716800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.103995 containerd[1709]: time="2024-12-13T13:29:56.103953500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:29:56.104552 containerd[1709]: time="2024-12-13T13:29:56.104524700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:29:56.104703 containerd[1709]: time="2024-12-13T13:29:56.104687100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:29:56.104988 containerd[1709]: time="2024-12-13T13:29:56.104965100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:29:56.105454 containerd[1709]: time="2024-12-13T13:29:56.105431100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.105640 containerd[1709]: time="2024-12-13T13:29:56.105616400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.105738000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106068400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106091600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106111400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106126800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106252600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106508200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106656200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:29:56.106779 containerd[1709]: time="2024-12-13T13:29:56.106674300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:29:56.107517 containerd[1709]: time="2024-12-13T13:29:56.107232500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:29:56.107517 containerd[1709]: time="2024-12-13T13:29:56.107317300Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:29:56.140206 containerd[1709]: time="2024-12-13T13:29:56.140172900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:29:56.140800 containerd[1709]: time="2024-12-13T13:29:56.140341700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:29:56.140800 containerd[1709]: time="2024-12-13T13:29:56.140381500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:29:56.140800 containerd[1709]: time="2024-12-13T13:29:56.140407200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:29:56.140800 containerd[1709]: time="2024-12-13T13:29:56.140447100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:29:56.140800 containerd[1709]: time="2024-12-13T13:29:56.140637800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:29:56.141349 containerd[1709]: time="2024-12-13T13:29:56.141315500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:29:56.141586 containerd[1709]: time="2024-12-13T13:29:56.141561300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:29:56.141681 containerd[1709]: time="2024-12-13T13:29:56.141667700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:29:56.141782 containerd[1709]: time="2024-12-13T13:29:56.141755500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:29:56.141927 containerd[1709]: time="2024-12-13T13:29:56.141829300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.141927 containerd[1709]: time="2024-12-13T13:29:56.141848200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.141927 containerd[1709]: time="2024-12-13T13:29:56.141865000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.142112 containerd[1709]: time="2024-12-13T13:29:56.142036400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.142112 containerd[1709]: time="2024-12-13T13:29:56.142061600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.142112 containerd[1709]: time="2024-12-13T13:29:56.142082600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.142307 containerd[1709]: time="2024-12-13T13:29:56.142097600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.142307 containerd[1709]: time="2024-12-13T13:29:56.142206500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:29:56.142307 containerd[1709]: time="2024-12-13T13:29:56.142241300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.142567 containerd[1709]: time="2024-12-13T13:29:56.142419900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.142567 containerd[1709]: time="2024-12-13T13:29:56.142446500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.142567 containerd[1709]: time="2024-12-13T13:29:56.142465400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.142567 containerd[1709]: time="2024-12-13T13:29:56.142497900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.142567 containerd[1709]: time="2024-12-13T13:29:56.142518000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142552500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142813100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142836600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142887400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142905100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142921200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142936800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142968900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.142998100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143034 containerd[1709]: time="2024-12-13T13:29:56.143014600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143461200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143564600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143591900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143622500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143640100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143652800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143669300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143694600Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:29:56.143844 containerd[1709]: time="2024-12-13T13:29:56.143709400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:29:56.144928 containerd[1709]: time="2024-12-13T13:29:56.144593900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:29:56.144928 containerd[1709]: time="2024-12-13T13:29:56.144680400Z" level=info msg="Connect containerd service"
Dec 13 13:29:56.144928 containerd[1709]: time="2024-12-13T13:29:56.144737700Z" level=info msg="using legacy CRI server"
Dec 13 13:29:56.144928 containerd[1709]: time="2024-12-13T13:29:56.144773500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:29:56.145639 containerd[1709]: time="2024-12-13T13:29:56.145347800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:29:56.146437 containerd[1709]: time="2024-12-13T13:29:56.146347000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:29:56.146692 containerd[1709]: time="2024-12-13T13:29:56.146481700Z" level=info msg="Start subscribing containerd event"
Dec 13 13:29:56.146692 containerd[1709]: time="2024-12-13T13:29:56.146540100Z" level=info msg="Start recovering state"
Dec 13 13:29:56.146692 containerd[1709]: time="2024-12-13T13:29:56.146610400Z" level=info msg="Start event monitor"
Dec 13 13:29:56.146692 containerd[1709]: time="2024-12-13T13:29:56.146629100Z" level=info msg="Start snapshots syncer"
Dec 13 13:29:56.146692 containerd[1709]: time="2024-12-13T13:29:56.146642000Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:29:56.146692 containerd[1709]: time="2024-12-13T13:29:56.146650400Z" level=info msg="Start streaming server"
Dec 13 13:29:56.147204 containerd[1709]: time="2024-12-13T13:29:56.147056100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:29:56.147204 containerd[1709]: time="2024-12-13T13:29:56.147117100Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:29:56.147770 containerd[1709]: time="2024-12-13T13:29:56.147337300Z" level=info msg="containerd successfully booted in 0.089611s"
Dec 13 13:29:56.148916 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:29:56.161979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:29:56.165937 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:29:56.169577 systemd[1]: Startup finished in 615ms (firmware) + 22.663s (loader) + 898ms (kernel) + 9.960s (initrd) + 10.482s (userspace) = 44.621s.
Dec 13 13:29:56.181419 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:29:56.202643 agetty[1843]: failed to open credentials directory
Dec 13 13:29:56.204639 agetty[1844]: failed to open credentials directory
Dec 13 13:29:56.680475 login[1843]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:29:56.682390 login[1844]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:29:56.696738 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:29:56.706090 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:29:56.714220 systemd-logind[1691]: New session 1 of user core.
Dec 13 13:29:56.721536 systemd-logind[1691]: New session 2 of user core.
Dec 13 13:29:56.728003 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:29:56.737092 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:29:56.754300 (systemd)[1873]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:29:56.787071 kubelet[1860]: E1213 13:29:56.786941    1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:29:56.791870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:29:56.792046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:29:56.912016 systemd[1873]: Queued start job for default target default.target.
Dec 13 13:29:56.919858 systemd[1873]: Created slice app.slice - User Application Slice.
Dec 13 13:29:56.919892 systemd[1873]: Reached target paths.target - Paths.
Dec 13 13:29:56.919910 systemd[1873]: Reached target timers.target - Timers.
Dec 13 13:29:56.921175 systemd[1873]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:29:56.932526 systemd[1873]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:29:56.932653 systemd[1873]: Reached target sockets.target - Sockets.
Dec 13 13:29:56.932672 systemd[1873]: Reached target basic.target - Basic System.
Dec 13 13:29:56.932715 systemd[1873]: Reached target default.target - Main User Target.
Dec 13 13:29:56.932762 systemd[1873]: Startup finished in 164ms.
Dec 13 13:29:56.933015 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:29:56.938907 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:29:56.939687 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:29:57.222435 waagent[1840]: 2024-12-13T13:29:57.222267Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Dec 13 13:29:57.225415 waagent[1840]: 2024-12-13T13:29:57.225351Z INFO Daemon Daemon OS: flatcar 4186.0.0
Dec 13 13:29:57.227758 waagent[1840]: 2024-12-13T13:29:57.227690Z INFO Daemon Daemon Python: 3.11.10
Dec 13 13:29:57.229899 waagent[1840]: 2024-12-13T13:29:57.229838Z INFO Daemon Daemon Run daemon
Dec 13 13:29:57.231772 waagent[1840]: 2024-12-13T13:29:57.231706Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.0.0'
Dec 13 13:29:57.236761 waagent[1840]: 2024-12-13T13:29:57.235380Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 13:29:57.236761 waagent[1840]: 2024-12-13T13:29:57.236524Z INFO Daemon Daemon Activate resource disk
Dec 13 13:29:57.237157 waagent[1840]: 2024-12-13T13:29:57.237117Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 13:29:57.242693 waagent[1840]: 2024-12-13T13:29:57.242638Z INFO Daemon Daemon Found device: None
Dec 13 13:29:57.255776 waagent[1840]: 2024-12-13T13:29:57.243532Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 13:29:57.255776 waagent[1840]: 2024-12-13T13:29:57.244278Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 13:29:57.255776 waagent[1840]: 2024-12-13T13:29:57.245343Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 13:29:57.255776 waagent[1840]: 2024-12-13T13:29:57.246104Z INFO Daemon Daemon Running default provisioning handler
Dec 13 13:29:57.258890 waagent[1840]: 2024-12-13T13:29:57.258577Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Dec 13 13:29:57.264425 waagent[1840]: 2024-12-13T13:29:57.264378Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 13:29:57.268521 waagent[1840]: 2024-12-13T13:29:57.268470Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 13:29:57.272035 waagent[1840]: 2024-12-13T13:29:57.269300Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 13:29:57.346031 waagent[1840]: 2024-12-13T13:29:57.343640Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 13:29:57.371736 waagent[1840]: 2024-12-13T13:29:57.371661Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 13:29:57.371761 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 13:29:57.374482 waagent[1840]: 2024-12-13T13:29:57.374420Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 13:29:57.376940 waagent[1840]: 2024-12-13T13:29:57.376894Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 13:29:57.379878 waagent[1840]: 2024-12-13T13:29:57.379829Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 13:29:57.382112 waagent[1840]: 2024-12-13T13:29:57.382064Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 13:29:57.384192 waagent[1840]: 2024-12-13T13:29:57.384147Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 13:29:57.407280 waagent[1840]: 2024-12-13T13:29:57.407234Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 13:29:57.414365 waagent[1840]: 2024-12-13T13:29:57.408490Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 13:29:57.414365 waagent[1840]: 2024-12-13T13:29:57.409076Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 13:29:57.492331 waagent[1840]: 2024-12-13T13:29:57.492178Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 13:29:57.495736 waagent[1840]: 2024-12-13T13:29:57.495674Z INFO Daemon Daemon Forcing an update of the goal state.
Dec 13 13:29:57.502211 waagent[1840]: 2024-12-13T13:29:57.502160Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 13:29:57.519372 waagent[1840]: 2024-12-13T13:29:57.519313Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Dec 13 13:29:57.532952 waagent[1840]: 2024-12-13T13:29:57.520772Z INFO Daemon
Dec 13 13:29:57.532952 waagent[1840]: 2024-12-13T13:29:57.522300Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 84052401-800f-45ed-b99a-6114f6d1bf91 eTag: 880937959381773802 source: Fabric]
Dec 13 13:29:57.532952 waagent[1840]: 2024-12-13T13:29:57.523689Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Dec 13 13:29:57.532952 waagent[1840]: 2024-12-13T13:29:57.524616Z INFO Daemon
Dec 13 13:29:57.532952 waagent[1840]: 2024-12-13T13:29:57.525367Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Dec 13 13:29:57.535916 waagent[1840]: 2024-12-13T13:29:57.535876Z INFO Daemon Daemon Downloading artifacts profile blob
Dec 13 13:29:57.607119 waagent[1840]: 2024-12-13T13:29:57.607058Z INFO Daemon Downloaded certificate {'thumbprint': '92E6F0B3E6C3CEB973930913924ACF1AB51719C6', 'hasPrivateKey': True}
Dec 13 13:29:57.617521 waagent[1840]: 2024-12-13T13:29:57.608602Z INFO Daemon Downloaded certificate {'thumbprint': '0A18E31EFFDC3E468C0166023049F5DEDC5891DE', 'hasPrivateKey': False}
Dec 13 13:29:57.617521 waagent[1840]: 2024-12-13T13:29:57.609543Z INFO Daemon Fetch goal state completed
Dec 13 13:29:57.618135 waagent[1840]: 2024-12-13T13:29:57.618090Z INFO Daemon Daemon Starting provisioning
Dec 13 13:29:57.623799 waagent[1840]: 2024-12-13T13:29:57.618954Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 13:29:57.623799 waagent[1840]: 2024-12-13T13:29:57.619667Z INFO Daemon Daemon Set hostname [ci-4186.0.0-a-6a956dd616]
Dec 13 13:29:57.632806 waagent[1840]: 2024-12-13T13:29:57.632737Z INFO Daemon Daemon Publish hostname [ci-4186.0.0-a-6a956dd616]
Dec 13 13:29:57.639276 waagent[1840]: 2024-12-13T13:29:57.633919Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 13:29:57.639276 waagent[1840]: 2024-12-13T13:29:57.634609Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 13:29:57.655385 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:29:57.655394 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:29:57.655438 systemd-networkd[1499]: eth0: DHCP lease lost
Dec 13 13:29:57.656657 waagent[1840]: 2024-12-13T13:29:57.656596Z INFO Daemon Daemon Create user account if not exists
Dec 13 13:29:57.671504 waagent[1840]: 2024-12-13T13:29:57.658004Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 13:29:57.671504 waagent[1840]: 2024-12-13T13:29:57.658779Z INFO Daemon Daemon Configure sudoer
Dec 13 13:29:57.671504 waagent[1840]: 2024-12-13T13:29:57.659469Z INFO Daemon Daemon Configure sshd
Dec 13 13:29:57.671504 waagent[1840]: 2024-12-13T13:29:57.659850Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Dec 13 13:29:57.671504 waagent[1840]: 2024-12-13T13:29:57.660485Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 13:29:57.671571 systemd-networkd[1499]: eth0: DHCPv6 lease lost
Dec 13 13:29:57.718795 systemd-networkd[1499]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 13:30:07.043088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:30:07.048970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:30:07.141617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:30:07.150015 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:30:07.757672 kubelet[1938]: E1213 13:30:07.757619    1938 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:30:07.761584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:30:07.761802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:30:18.012664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:30:18.017963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:30:18.110072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:30:18.114041 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:30:18.542591 chronyd[1690]: Selected source PHC0
Dec 13 13:30:18.660255 kubelet[1954]: E1213 13:30:18.660197    1954 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:30:18.662938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:30:18.663136 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:30:27.745667 waagent[1840]: 2024-12-13T13:30:27.745586Z INFO Daemon Daemon Provisioning complete
Dec 13 13:30:27.760655 waagent[1840]: 2024-12-13T13:30:27.760594Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 13:30:27.766713 waagent[1840]: 2024-12-13T13:30:27.761738Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 13:30:27.766713 waagent[1840]: 2024-12-13T13:30:27.762448Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Dec 13 13:30:27.880231 waagent[1962]: 2024-12-13T13:30:27.880135Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Dec 13 13:30:27.880630 waagent[1962]: 2024-12-13T13:30:27.880284Z INFO ExtHandler ExtHandler OS: flatcar 4186.0.0
Dec 13 13:30:27.880630 waagent[1962]: 2024-12-13T13:30:27.880364Z INFO ExtHandler ExtHandler Python: 3.11.10
Dec 13 13:30:27.927608 waagent[1962]: 2024-12-13T13:30:27.927536Z INFO ExtHandler ExtHandler Distro: flatcar-4186.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 13:30:27.927817 waagent[1962]: 2024-12-13T13:30:27.927770Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 13:30:27.927926 waagent[1962]: 2024-12-13T13:30:27.927874Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 13:30:27.935221 waagent[1962]: 2024-12-13T13:30:27.935133Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 13:30:27.944605 waagent[1962]: 2024-12-13T13:30:27.944551Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 13:30:27.945060 waagent[1962]: 2024-12-13T13:30:27.945006Z INFO ExtHandler
Dec 13 13:30:27.945137 waagent[1962]: 2024-12-13T13:30:27.945100Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ea4da1b9-a17b-4609-8c46-11ccf77c6cee eTag: 880937959381773802 source: Fabric]
Dec 13 13:30:27.945452 waagent[1962]: 2024-12-13T13:30:27.945402Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 13:30:27.946010 waagent[1962]: 2024-12-13T13:30:27.945957Z INFO ExtHandler
Dec 13 13:30:27.946080 waagent[1962]: 2024-12-13T13:30:27.946040Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 13:30:27.949727 waagent[1962]: 2024-12-13T13:30:27.949685Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 13:30:28.032572 waagent[1962]: 2024-12-13T13:30:28.032508Z INFO ExtHandler Downloaded certificate {'thumbprint': '92E6F0B3E6C3CEB973930913924ACF1AB51719C6', 'hasPrivateKey': True}
Dec 13 13:30:28.032963 waagent[1962]: 2024-12-13T13:30:28.032914Z INFO ExtHandler Downloaded certificate {'thumbprint': '0A18E31EFFDC3E468C0166023049F5DEDC5891DE', 'hasPrivateKey': False}
Dec 13 13:30:28.033372 waagent[1962]: 2024-12-13T13:30:28.033320Z INFO ExtHandler Fetch goal state completed
Dec 13 13:30:28.048109 waagent[1962]: 2024-12-13T13:30:28.048054Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1962
Dec 13 13:30:28.048253 waagent[1962]: 2024-12-13T13:30:28.048208Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Dec 13 13:30:28.049756 waagent[1962]: 2024-12-13T13:30:28.049693Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.0.0', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 13:30:28.050133 waagent[1962]: 2024-12-13T13:30:28.050082Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 13:30:28.083419 waagent[1962]: 2024-12-13T13:30:28.083380Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 13:30:28.083596 waagent[1962]: 2024-12-13T13:30:28.083553Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 13:30:28.090336 waagent[1962]: 2024-12-13T13:30:28.090109Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 13:30:28.096585 systemd[1]: Reloading requested from client PID 1977 ('systemctl') (unit waagent.service)...
Dec 13 13:30:28.096600 systemd[1]: Reloading...
Dec 13 13:30:28.189777 zram_generator::config[2011]: No configuration found.
Dec 13 13:30:28.306207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:30:28.390466 systemd[1]: Reloading finished in 293 ms.
Dec 13 13:30:28.422468 waagent[1962]: 2024-12-13T13:30:28.421958Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Dec 13 13:30:28.430147 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit waagent.service)...
Dec 13 13:30:28.430163 systemd[1]: Reloading...
Dec 13 13:30:28.506636 zram_generator::config[2102]: No configuration found.
Dec 13 13:30:28.628187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:30:28.715677 systemd[1]: Reloading finished in 285 ms.
Dec 13 13:30:28.741771 waagent[1962]: 2024-12-13T13:30:28.740961Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Dec 13 13:30:28.741771 waagent[1962]: 2024-12-13T13:30:28.741163Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Dec 13 13:30:28.746561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 13:30:28.759140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:30:28.902578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:30:28.912241 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:30:29.418063 kubelet[2172]: E1213 13:30:29.418008 2172 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:30:29.419826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:30:29.419993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:30:29.621142 waagent[1962]: 2024-12-13T13:30:29.621051Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 13 13:30:29.621865 waagent[1962]: 2024-12-13T13:30:29.621803Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Dec 13 13:30:29.622591 waagent[1962]: 2024-12-13T13:30:29.622533Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 13:30:29.622719 waagent[1962]: 2024-12-13T13:30:29.622671Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 13:30:29.623174 waagent[1962]: 2024-12-13T13:30:29.623131Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 13:30:29.623244 waagent[1962]: 2024-12-13T13:30:29.623201Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 13:30:29.623487 waagent[1962]: 2024-12-13T13:30:29.623444Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 13:30:29.623938 waagent[1962]: 2024-12-13T13:30:29.623892Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 13:30:29.623938 waagent[1962]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 13:30:29.623938 waagent[1962]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 13:30:29.623938 waagent[1962]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 13:30:29.623938 waagent[1962]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 13:30:29.623938 waagent[1962]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 13:30:29.623938 waagent[1962]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 13:30:29.624218 waagent[1962]: 2024-12-13T13:30:29.623959Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 13:30:29.624218 waagent[1962]: 2024-12-13T13:30:29.624045Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 13:30:29.624294 waagent[1962]: 2024-12-13T13:30:29.624232Z INFO EnvHandler ExtHandler Configure routes
Dec 13 13:30:29.624505 waagent[1962]: 2024-12-13T13:30:29.624320Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 13:30:29.624612 waagent[1962]: 2024-12-13T13:30:29.624541Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 13:30:29.625023 waagent[1962]: 2024-12-13T13:30:29.624964Z INFO EnvHandler ExtHandler Routes:None
Dec 13 13:30:29.625293 waagent[1962]: 2024-12-13T13:30:29.625231Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 13:30:29.626140 waagent[1962]: 2024-12-13T13:30:29.626045Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 13:30:29.626202 waagent[1962]: 2024-12-13T13:30:29.626147Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 13:30:29.627215 waagent[1962]: 2024-12-13T13:30:29.626833Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 13:30:29.632114 waagent[1962]: 2024-12-13T13:30:29.632077Z INFO ExtHandler ExtHandler
Dec 13 13:30:29.632212 waagent[1962]: 2024-12-13T13:30:29.632171Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d55e565c-52dc-4cf1-9c2f-0302e0a3ef6a correlation 610e227e-5343-4ba5-af29-adadebf6f5f0 created: 2024-12-13T13:29:00.337189Z]
Dec 13 13:30:29.633057 waagent[1962]: 2024-12-13T13:30:29.633014Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 13 13:30:29.634390 waagent[1962]: 2024-12-13T13:30:29.634346Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Dec 13 13:30:29.667761 waagent[1962]: 2024-12-13T13:30:29.667688Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8CCBE158-5C2A-4E57-AF2F-241383EC2BED;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Dec 13 13:30:29.674917 waagent[1962]: 2024-12-13T13:30:29.674824Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 13:30:29.674917 waagent[1962]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 13:30:29.674917 waagent[1962]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 13:30:29.674917 waagent[1962]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:bc:1e brd ff:ff:ff:ff:ff:ff
Dec 13 13:30:29.674917 waagent[1962]: 3: enP60803s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:bc:1e brd ff:ff:ff:ff:ff:ff\ altname enP60803p0s2
Dec 13 13:30:29.674917 waagent[1962]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 13:30:29.674917 waagent[1962]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 13:30:29.674917 waagent[1962]: 2: eth0 inet 10.200.8.33/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 13:30:29.674917 waagent[1962]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 13:30:29.674917 waagent[1962]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Dec 13 13:30:29.674917 waagent[1962]: 2: eth0 inet6 fe80::20d:3aff:feb6:bc1e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Dec 13 13:30:29.674917 waagent[1962]: 3: enP60803s1 inet6 fe80::20d:3aff:feb6:bc1e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Dec 13 13:30:29.710818 waagent[1962]: 2024-12-13T13:30:29.710766Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Dec 13 13:30:29.710818 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 13:30:29.710818 waagent[1962]: pkts bytes target prot opt in out source destination
Dec 13 13:30:29.710818 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 13:30:29.710818 waagent[1962]: pkts bytes target prot opt in out source destination
Dec 13 13:30:29.710818 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 13:30:29.710818 waagent[1962]: pkts bytes target prot opt in out source destination
Dec 13 13:30:29.710818 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 13:30:29.710818 waagent[1962]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 13:30:29.710818 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 13:30:29.715015 waagent[1962]: 2024-12-13T13:30:29.714963Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 13 13:30:29.715015 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 13:30:29.715015 waagent[1962]: pkts bytes target prot opt in out source destination
Dec 13 13:30:29.715015 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 13:30:29.715015 waagent[1962]: pkts bytes target prot opt in out source destination
Dec 13 13:30:29.715015 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 13:30:29.715015 waagent[1962]: pkts bytes target prot opt in out source destination
Dec 13 13:30:29.715015 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 13:30:29.715015 waagent[1962]: 9 814 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 13:30:29.715015 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 13:30:29.715387 waagent[1962]: 2024-12-13T13:30:29.715255Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 13:30:39.154965 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Dec 13 13:30:39.526056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 13:30:39.531971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:30:39.621562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:30:39.625876 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:30:39.834519 update_engine[1692]: I20241213 13:30:39.834342 1692 update_attempter.cc:509] Updating boot flags...
Dec 13 13:30:40.232244 kubelet[2217]: E1213 13:30:40.231839 2217 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:30:40.235355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:30:40.236003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:30:40.258766 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2240)
Dec 13 13:30:50.275591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 13:30:50.281979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:30:50.380490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:30:50.384728 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:30:50.423338 kubelet[2296]: E1213 13:30:50.423289 2296 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:30:50.425920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:30:50.426122 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:31:00.525925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 13:31:00.532941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:31:00.633063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:31:00.641036 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:31:01.250527 kubelet[2312]: E1213 13:31:01.250451 2312 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:31:01.253033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:31:01.253240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:31:07.328281 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:31:07.337327 systemd[1]: Started sshd@0-10.200.8.33:22-10.200.16.10:55874.service - OpenSSH per-connection server daemon (10.200.16.10:55874).
Dec 13 13:31:08.151796 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 55874 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:08.153507 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:08.157961 systemd-logind[1691]: New session 3 of user core.
Dec 13 13:31:08.164181 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:31:08.776036 systemd[1]: Started sshd@1-10.200.8.33:22-10.200.16.10:53188.service - OpenSSH per-connection server daemon (10.200.16.10:53188).
Dec 13 13:31:09.492736 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 53188 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:09.494279 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:09.498396 systemd-logind[1691]: New session 4 of user core.
Dec 13 13:31:09.505909 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:31:09.995626 sshd[2329]: Connection closed by 10.200.16.10 port 53188
Dec 13 13:31:09.996613 sshd-session[2327]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:10.000904 systemd[1]: sshd@1-10.200.8.33:22-10.200.16.10:53188.service: Deactivated successfully.
Dec 13 13:31:10.003151 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:31:10.003887 systemd-logind[1691]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:31:10.004690 systemd-logind[1691]: Removed session 4.
Dec 13 13:31:10.120297 systemd[1]: Started sshd@2-10.200.8.33:22-10.200.16.10:53196.service - OpenSSH per-connection server daemon (10.200.16.10:53196).
Dec 13 13:31:10.832783 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 53196 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:10.834503 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:10.839010 systemd-logind[1691]: New session 5 of user core.
Dec 13 13:31:10.850121 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:31:11.276272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 13:31:11.282021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:31:11.330770 sshd[2336]: Connection closed by 10.200.16.10 port 53196
Dec 13 13:31:11.330967 sshd-session[2334]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:11.334943 systemd-logind[1691]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:31:11.335986 systemd[1]: sshd@2-10.200.8.33:22-10.200.16.10:53196.service: Deactivated successfully.
Dec 13 13:31:11.339595 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:31:11.342962 systemd-logind[1691]: Removed session 5.
Dec 13 13:31:11.377622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:31:11.381533 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:31:11.421103 kubelet[2348]: E1213 13:31:11.421058 2348 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:31:11.423473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:31:11.423670 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:31:11.462026 systemd[1]: Started sshd@3-10.200.8.33:22-10.200.16.10:53210.service - OpenSSH per-connection server daemon (10.200.16.10:53210).
Dec 13 13:31:12.169288 sshd[2357]: Accepted publickey for core from 10.200.16.10 port 53210 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:12.170977 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:12.176688 systemd-logind[1691]: New session 6 of user core.
Dec 13 13:31:12.182898 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:31:12.674422 sshd[2359]: Connection closed by 10.200.16.10 port 53210
Dec 13 13:31:12.675594 sshd-session[2357]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:12.679321 systemd[1]: sshd@3-10.200.8.33:22-10.200.16.10:53210.service: Deactivated successfully.
Dec 13 13:31:12.681187 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:31:12.681869 systemd-logind[1691]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:31:12.682688 systemd-logind[1691]: Removed session 6.
Dec 13 13:31:12.799586 systemd[1]: Started sshd@4-10.200.8.33:22-10.200.16.10:53220.service - OpenSSH per-connection server daemon (10.200.16.10:53220).
Dec 13 13:31:13.511003 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 53220 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:13.512716 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:13.517514 systemd-logind[1691]: New session 7 of user core.
Dec 13 13:31:13.525179 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:31:14.075481 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:31:14.075978 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:31:14.104889 sudo[2367]: pam_unix(sudo:session): session closed for user root
Dec 13 13:31:14.221161 sshd[2366]: Connection closed by 10.200.16.10 port 53220
Dec 13 13:31:14.222358 sshd-session[2364]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:14.225926 systemd[1]: sshd@4-10.200.8.33:22-10.200.16.10:53220.service: Deactivated successfully.
Dec 13 13:31:14.228400 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:31:14.230085 systemd-logind[1691]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:31:14.231121 systemd-logind[1691]: Removed session 7.
Dec 13 13:31:14.349493 systemd[1]: Started sshd@5-10.200.8.33:22-10.200.16.10:53230.service - OpenSSH per-connection server daemon (10.200.16.10:53230).
Dec 13 13:31:15.063047 sshd[2372]: Accepted publickey for core from 10.200.16.10 port 53230 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:15.064789 sshd-session[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:15.070481 systemd-logind[1691]: New session 8 of user core.
Dec 13 13:31:15.075126 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:31:15.450440 sudo[2376]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:31:15.450883 sudo[2376]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:31:15.454207 sudo[2376]: pam_unix(sudo:session): session closed for user root
Dec 13 13:31:15.459024 sudo[2375]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:31:15.459353 sudo[2375]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:31:15.473703 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:31:15.497565 augenrules[2398]: No rules
Dec 13 13:31:15.498832 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:31:15.499047 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:31:15.499845 sudo[2375]: pam_unix(sudo:session): session closed for user root
Dec 13 13:31:15.618824 sshd[2374]: Connection closed by 10.200.16.10 port 53230
Dec 13 13:31:15.619409 sshd-session[2372]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:15.623606 systemd[1]: sshd@5-10.200.8.33:22-10.200.16.10:53230.service: Deactivated successfully.
Dec 13 13:31:15.625597 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:31:15.626475 systemd-logind[1691]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:31:15.627517 systemd-logind[1691]: Removed session 8.
Dec 13 13:31:15.747313 systemd[1]: Started sshd@6-10.200.8.33:22-10.200.16.10:53232.service - OpenSSH per-connection server daemon (10.200.16.10:53232).
Dec 13 13:31:16.465093 sshd[2406]: Accepted publickey for core from 10.200.16.10 port 53232 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:31:16.466812 sshd-session[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:16.472482 systemd-logind[1691]: New session 9 of user core.
Dec 13 13:31:16.481892 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:31:16.852018 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:31:16.852365 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:31:18.524113 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:31:18.526084 (dockerd)[2427]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:31:19.779426 dockerd[2427]: time="2024-12-13T13:31:19.779302580Z" level=info msg="Starting up"
Dec 13 13:31:20.349058 dockerd[2427]: time="2024-12-13T13:31:20.349017112Z" level=info msg="Loading containers: start."
Dec 13 13:31:20.567774 kernel: Initializing XFRM netlink socket
Dec 13 13:31:20.742025 systemd-networkd[1499]: docker0: Link UP
Dec 13 13:31:20.776237 dockerd[2427]: time="2024-12-13T13:31:20.776202084Z" level=info msg="Loading containers: done."
Dec 13 13:31:20.842736 dockerd[2427]: time="2024-12-13T13:31:20.842688825Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:31:20.843232 dockerd[2427]: time="2024-12-13T13:31:20.842823429Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:31:20.843232 dockerd[2427]: time="2024-12-13T13:31:20.842970133Z" level=info msg="Daemon has completed initialization"
Dec 13 13:31:20.890691 dockerd[2427]: time="2024-12-13T13:31:20.890647425Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:31:20.890812 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:31:21.526382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 13:31:21.531966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:31:21.632282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:31:21.636412 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:31:22.189712 kubelet[2623]: E1213 13:31:22.189654 2623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:31:22.192567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:31:22.192800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:31:22.881952 containerd[1709]: time="2024-12-13T13:31:22.881911123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 13:31:23.700420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373849604.mount: Deactivated successfully.
Dec 13 13:31:25.489112 containerd[1709]: time="2024-12-13T13:31:25.489015407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:25.492337 containerd[1709]: time="2024-12-13T13:31:25.492268788Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650"
Dec 13 13:31:25.497160 containerd[1709]: time="2024-12-13T13:31:25.497090110Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:25.501403 containerd[1709]: time="2024-12-13T13:31:25.501354417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:25.502972 containerd[1709]: time="2024-12-13T13:31:25.502431844Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.62048032s"
Dec 13 13:31:25.502972 containerd[1709]: time="2024-12-13T13:31:25.502474945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 13:31:25.524279 containerd[1709]: time="2024-12-13T13:31:25.524254093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 13:31:27.284456 containerd[1709]: time="2024-12-13T13:31:27.284399471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:27.287226 containerd[1709]: time="2024-12-13T13:31:27.287160240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417"
Dec 13 13:31:27.292038 containerd[1709]: time="2024-12-13T13:31:27.291974261Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:27.299861 containerd[1709]: time="2024-12-13T13:31:27.299789958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:27.300802 containerd[1709]: time="2024-12-13T13:31:27.300765682Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.776467188s"
Dec 13 13:31:27.300886 containerd[1709]: time="2024-12-13T13:31:27.300806683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 13:31:27.323478 containerd[1709]: time="2024-12-13T13:31:27.323453153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 13:31:28.576234 containerd[1709]: time="2024-12-13T13:31:28.576157666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:28.578532 containerd[1709]: time="2024-12-13T13:31:28.578469924Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043"
Dec 13 13:31:28.581870 containerd[1709]: time="2024-12-13T13:31:28.581813408Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:28.589929 containerd[1709]: time="2024-12-13T13:31:28.589892211Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.266412558s"
Dec 13 13:31:28.589929 containerd[1709]: time="2024-12-13T13:31:28.589925612Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 13:31:28.590775 containerd[1709]: time="2024-12-13T13:31:28.590413824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:31:28.612998 containerd[1709]: time="2024-12-13T13:31:28.612941791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 13:31:29.902996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735003935.mount: Deactivated successfully.
Dec 13 13:31:30.371125 containerd[1709]: time="2024-12-13T13:31:30.371067018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:30.374445 containerd[1709]: time="2024-12-13T13:31:30.374294999Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Dec 13 13:31:30.379077 containerd[1709]: time="2024-12-13T13:31:30.379010017Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:30.385625 containerd[1709]: time="2024-12-13T13:31:30.385596883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:30.386465 containerd[1709]: time="2024-12-13T13:31:30.386298101Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.773299808s" Dec 13 13:31:30.386465 containerd[1709]: time="2024-12-13T13:31:30.386345802Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 13:31:30.408273 containerd[1709]: time="2024-12-13T13:31:30.408248753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:31:30.963568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535718510.mount: Deactivated successfully. 
Dec 13 13:31:32.223524 containerd[1709]: time="2024-12-13T13:31:32.223469522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:32.226283 containerd[1709]: time="2024-12-13T13:31:32.226236594Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 13:31:32.229711 containerd[1709]: time="2024-12-13T13:31:32.229658183Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:32.237852 containerd[1709]: time="2024-12-13T13:31:32.237812295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:32.239002 containerd[1709]: time="2024-12-13T13:31:32.238844722Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.830564469s" Dec 13 13:31:32.239002 containerd[1709]: time="2024-12-13T13:31:32.238880923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 13:31:32.260381 containerd[1709]: time="2024-12-13T13:31:32.260358783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:31:32.275469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 13:31:32.280989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 13:31:32.377479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:32.381464 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:31:32.418118 kubelet[2780]: E1213 13:31:32.418036 2780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:31:32.420148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:31:32.420352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:31:39.165224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358032629.mount: Deactivated successfully. Dec 13 13:31:39.342447 containerd[1709]: time="2024-12-13T13:31:39.342377675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:39.391704 containerd[1709]: time="2024-12-13T13:31:39.391629522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 13:31:39.440740 containerd[1709]: time="2024-12-13T13:31:39.440104248Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:39.490875 containerd[1709]: time="2024-12-13T13:31:39.490823135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:39.492505 containerd[1709]: time="2024-12-13T13:31:39.491948466Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 7.231501381s" Dec 13 13:31:39.492505 containerd[1709]: time="2024-12-13T13:31:39.491995667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 13:31:39.515535 containerd[1709]: time="2024-12-13T13:31:39.515506911Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 13:31:41.103618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053760867.mount: Deactivated successfully. Dec 13 13:31:42.526124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 13:31:42.532018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:31:50.285003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:50.289149 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:31:50.324820 kubelet[2808]: E1213 13:31:50.324781 2808 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:31:50.326974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:31:50.327143 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 13:31:52.869973 containerd[1709]: time="2024-12-13T13:31:52.869874126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:52.871949 containerd[1709]: time="2024-12-13T13:31:52.871889679Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Dec 13 13:31:52.875306 containerd[1709]: time="2024-12-13T13:31:52.875242767Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:52.883144 containerd[1709]: time="2024-12-13T13:31:52.883084474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:52.884401 containerd[1709]: time="2024-12-13T13:31:52.884210603Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 13.368672892s" Dec 13 13:31:52.884401 containerd[1709]: time="2024-12-13T13:31:52.884248104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 13:31:55.727060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:55.733036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:31:55.760952 systemd[1]: Reloading requested from client PID 2919 ('systemctl') (unit session-9.scope)... Dec 13 13:31:55.760974 systemd[1]: Reloading... 
Dec 13 13:31:55.856815 zram_generator::config[2958]: No configuration found. Dec 13 13:31:55.996768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:31:56.081792 systemd[1]: Reloading finished in 320 ms. Dec 13 13:31:56.131693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:56.136094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:31:56.138740 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:31:56.138980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:56.144149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:31:56.397304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:56.403603 (kubelet)[3031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:31:56.440548 kubelet[3031]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:31:56.440548 kubelet[3031]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:31:56.440548 kubelet[3031]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:31:56.440964 kubelet[3031]: I1213 13:31:56.440622 3031 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:31:56.556826 kubelet[3031]: I1213 13:31:56.556792 3031 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:31:56.556826 kubelet[3031]: I1213 13:31:56.556816 3031 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:31:56.557103 kubelet[3031]: I1213 13:31:56.557085 3031 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:31:57.052874 kubelet[3031]: I1213 13:31:57.052466 3031 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:31:57.053029 kubelet[3031]: E1213 13:31:57.052969 3031 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.063835 kubelet[3031]: I1213 13:31:57.063795 3031 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:31:57.065139 kubelet[3031]: I1213 13:31:57.065092 3031 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:31:57.065335 kubelet[3031]: I1213 13:31:57.065136 3031 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.0.0-a-6a956dd616","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:31:57.065704 kubelet[3031]: I1213 13:31:57.065681 3031 topology_manager.go:138] "Creating topology manager with none policy" Dec 
13 13:31:57.065704 kubelet[3031]: I1213 13:31:57.065706 3031 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:31:57.065893 kubelet[3031]: I1213 13:31:57.065873 3031 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:31:57.066632 kubelet[3031]: I1213 13:31:57.066612 3031 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:31:57.066632 kubelet[3031]: I1213 13:31:57.066633 3031 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:31:57.066954 kubelet[3031]: I1213 13:31:57.066661 3031 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:31:57.066954 kubelet[3031]: I1213 13:31:57.066680 3031 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:31:57.071638 kubelet[3031]: W1213 13:31:57.071482 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.071638 kubelet[3031]: E1213 13:31:57.071552 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.072290 kubelet[3031]: W1213 13:31:57.071880 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-6a956dd616&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.072290 kubelet[3031]: E1213 13:31:57.071927 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-6a956dd616&limit=500&resourceVersion=0": 
dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.072290 kubelet[3031]: I1213 13:31:57.072003 3031 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:31:57.074243 kubelet[3031]: I1213 13:31:57.073608 3031 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:31:57.074243 kubelet[3031]: W1213 13:31:57.073672 3031 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:31:57.075458 kubelet[3031]: I1213 13:31:57.075192 3031 server.go:1264] "Started kubelet" Dec 13 13:31:57.083774 kubelet[3031]: E1213 13:31:57.081993 3031 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.0.0-a-6a956dd616.1810bfc6cb6fa706 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-6a956dd616,UID:ci-4186.0.0-a-6a956dd616,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-6a956dd616,},FirstTimestamp:2024-12-13 13:31:57.075162886 +0000 UTC m=+0.668054768,LastTimestamp:2024-12-13 13:31:57.075162886 +0000 UTC m=+0.668054768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-6a956dd616,}" Dec 13 13:31:57.083774 kubelet[3031]: I1213 13:31:57.082151 3031 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:31:57.083774 kubelet[3031]: I1213 13:31:57.083055 3031 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:31:57.083774 kubelet[3031]: I1213 
13:31:57.083427 3031 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:31:57.083774 kubelet[3031]: I1213 13:31:57.083465 3031 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:31:57.086557 kubelet[3031]: I1213 13:31:57.086525 3031 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:31:57.090516 kubelet[3031]: I1213 13:31:57.090501 3031 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:31:57.092046 kubelet[3031]: W1213 13:31:57.092006 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.092154 kubelet[3031]: E1213 13:31:57.092142 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.092306 kubelet[3031]: E1213 13:31:57.092273 3031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-6a956dd616?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="200ms" Dec 13 13:31:57.092675 kubelet[3031]: I1213 13:31:57.092656 3031 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:31:57.092895 kubelet[3031]: I1213 13:31:57.092878 3031 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:31:57.094881 kubelet[3031]: I1213 13:31:57.094865 3031 factory.go:221] Registration of 
the containerd container factory successfully Dec 13 13:31:57.094987 kubelet[3031]: I1213 13:31:57.094972 3031 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:31:57.097529 kubelet[3031]: I1213 13:31:57.097511 3031 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:31:57.110431 kubelet[3031]: E1213 13:31:57.110408 3031 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:31:57.145719 kubelet[3031]: I1213 13:31:57.145689 3031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:31:57.147790 kubelet[3031]: I1213 13:31:57.147772 3031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:31:57.147970 kubelet[3031]: I1213 13:31:57.147842 3031 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:31:57.147970 kubelet[3031]: I1213 13:31:57.147870 3031 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:31:57.148205 kubelet[3031]: E1213 13:31:57.148162 3031 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:31:57.150841 kubelet[3031]: W1213 13:31:57.150731 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.150841 kubelet[3031]: E1213 13:31:57.150804 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Dec 13 13:31:57.152364 kubelet[3031]: I1213 13:31:57.152249 3031 cpu_manager.go:214] 
"Starting CPU manager" policy="none" Dec 13 13:31:57.152364 kubelet[3031]: I1213 13:31:57.152346 3031 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:31:57.152364 kubelet[3031]: I1213 13:31:57.152365 3031 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:31:57.162297 kubelet[3031]: I1213 13:31:57.162274 3031 policy_none.go:49] "None policy: Start" Dec 13 13:31:57.162814 kubelet[3031]: I1213 13:31:57.162798 3031 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:31:57.162897 kubelet[3031]: I1213 13:31:57.162823 3031 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:31:57.190075 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:31:57.192669 kubelet[3031]: I1213 13:31:57.192648 3031 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-6a956dd616" Dec 13 13:31:57.192996 kubelet[3031]: E1213 13:31:57.192970 3031 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4186.0.0-a-6a956dd616" Dec 13 13:31:57.202475 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:31:57.205995 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:31:57.216392 kubelet[3031]: I1213 13:31:57.216374 3031 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:31:57.216706 kubelet[3031]: I1213 13:31:57.216670 3031 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:31:57.216915 kubelet[3031]: I1213 13:31:57.216901 3031 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:31:57.219272 kubelet[3031]: E1213 13:31:57.219043 3031 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.0.0-a-6a956dd616\" not found" Dec 13 13:31:57.248472 kubelet[3031]: I1213 13:31:57.248394 3031 topology_manager.go:215] "Topology Admit Handler" podUID="7d2bed3941884b44f4e52176dcd76efd" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-6a956dd616" Dec 13 13:31:57.250282 kubelet[3031]: I1213 13:31:57.250250 3031 topology_manager.go:215] "Topology Admit Handler" podUID="50235dfc4ae625fd74146c054a605b06" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-6a956dd616" Dec 13 13:31:57.251965 kubelet[3031]: I1213 13:31:57.251736 3031 topology_manager.go:215] "Topology Admit Handler" podUID="74ab291aa314600021f302074159456e" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-6a956dd616" Dec 13 13:31:57.258741 systemd[1]: Created slice kubepods-burstable-pod7d2bed3941884b44f4e52176dcd76efd.slice - libcontainer container kubepods-burstable-pod7d2bed3941884b44f4e52176dcd76efd.slice. Dec 13 13:31:57.275042 systemd[1]: Created slice kubepods-burstable-pod50235dfc4ae625fd74146c054a605b06.slice - libcontainer container kubepods-burstable-pod50235dfc4ae625fd74146c054a605b06.slice. Dec 13 13:31:57.279100 systemd[1]: Created slice kubepods-burstable-pod74ab291aa314600021f302074159456e.slice - libcontainer container kubepods-burstable-pod74ab291aa314600021f302074159456e.slice. 
Dec 13 13:31:57.292865 kubelet[3031]: E1213 13:31:57.292831 3031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-6a956dd616?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="400ms"
Dec 13 13:31:57.299000 kubelet[3031]: I1213 13:31:57.298962 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d2bed3941884b44f4e52176dcd76efd-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" (UID: \"7d2bed3941884b44f4e52176dcd76efd\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299000 kubelet[3031]: I1213 13:31:57.298995 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299200 kubelet[3031]: I1213 13:31:57.299019 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299200 kubelet[3031]: I1213 13:31:57.299040 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299200 kubelet[3031]: I1213 13:31:57.299064 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299200 kubelet[3031]: I1213 13:31:57.299084 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d2bed3941884b44f4e52176dcd76efd-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" (UID: \"7d2bed3941884b44f4e52176dcd76efd\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299200 kubelet[3031]: I1213 13:31:57.299106 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d2bed3941884b44f4e52176dcd76efd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" (UID: \"7d2bed3941884b44f4e52176dcd76efd\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299353 kubelet[3031]: I1213 13:31:57.299131 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.299353 kubelet[3031]: I1213 13:31:57.299156 3031 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74ab291aa314600021f302074159456e-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-6a956dd616\" (UID: \"74ab291aa314600021f302074159456e\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.395377 kubelet[3031]: I1213 13:31:57.395227 3031 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.395918 kubelet[3031]: E1213 13:31:57.395885 3031 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.573923 containerd[1709]: time="2024-12-13T13:31:57.573862099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-6a956dd616,Uid:7d2bed3941884b44f4e52176dcd76efd,Namespace:kube-system,Attempt:0,}"
Dec 13 13:31:57.578765 containerd[1709]: time="2024-12-13T13:31:57.578520722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-6a956dd616,Uid:50235dfc4ae625fd74146c054a605b06,Namespace:kube-system,Attempt:0,}"
Dec 13 13:31:57.582606 containerd[1709]: time="2024-12-13T13:31:57.582277321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-6a956dd616,Uid:74ab291aa314600021f302074159456e,Namespace:kube-system,Attempt:0,}"
Dec 13 13:31:57.694407 kubelet[3031]: E1213 13:31:57.694261 3031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-6a956dd616?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="800ms"
Dec 13 13:31:57.798948 kubelet[3031]: I1213 13:31:57.798770 3031 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:57.799158 kubelet[3031]: E1213 13:31:57.799131 3031 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:58.168174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533853904.mount: Deactivated successfully.
Dec 13 13:31:58.171964 kubelet[3031]: W1213 13:31:58.171907 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.172064 kubelet[3031]: E1213 13:31:58.171971 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.192371 kubelet[3031]: W1213 13:31:58.192334 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.192371 kubelet[3031]: E1213 13:31:58.192374 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.197143 containerd[1709]: time="2024-12-13T13:31:58.197100688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:31:58.226026 containerd[1709]: time="2024-12-13T13:31:58.225888645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Dec 13 13:31:58.232918 containerd[1709]: time="2024-12-13T13:31:58.232850128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:31:58.239549 containerd[1709]: time="2024-12-13T13:31:58.239505603Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:31:58.251171 containerd[1709]: time="2024-12-13T13:31:58.250862401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:31:58.257157 containerd[1709]: time="2024-12-13T13:31:58.257121666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:31:58.263234 containerd[1709]: time="2024-12-13T13:31:58.263197826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:31:58.264047 containerd[1709]: time="2024-12-13T13:31:58.264009747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.378822ms"
Dec 13 13:31:58.270908 containerd[1709]: time="2024-12-13T13:31:58.270849027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:31:58.273994 containerd[1709]: time="2024-12-13T13:31:58.273961209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.966706ms"
Dec 13 13:31:58.296928 containerd[1709]: time="2024-12-13T13:31:58.296738208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 714.375484ms"
Dec 13 13:31:58.494900 kubelet[3031]: E1213 13:31:58.494768 3031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-6a956dd616?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="1.6s"
Dec 13 13:31:58.602780 kubelet[3031]: I1213 13:31:58.602267 3031 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:58.602780 kubelet[3031]: E1213 13:31:58.602607 3031 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:31:58.627521 kubelet[3031]: W1213 13:31:58.627481 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.627659 kubelet[3031]: E1213 13:31:58.627530 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.660943 kubelet[3031]: W1213 13:31:58.660887 3031 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-6a956dd616&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:58.660943 kubelet[3031]: E1213 13:31:58.660946 3031 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-6a956dd616&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000278708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000324409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000344509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000426112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000165005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000237907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000258307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:59.000564 containerd[1709]: time="2024-12-13T13:31:59.000339509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:59.007832 containerd[1709]: time="2024-12-13T13:31:59.007026485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:31:59.007832 containerd[1709]: time="2024-12-13T13:31:59.007071386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:31:59.008446 containerd[1709]: time="2024-12-13T13:31:59.007165589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:59.008446 containerd[1709]: time="2024-12-13T13:31:59.008313919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:59.034996 systemd[1]: Started cri-containerd-777a3c9270fcf16e3e82d2f6b58c36b6e24faffd0fd306b90f56ddb764c08d76.scope - libcontainer container 777a3c9270fcf16e3e82d2f6b58c36b6e24faffd0fd306b90f56ddb764c08d76.
Dec 13 13:31:59.042942 systemd[1]: Started cri-containerd-7b7db402efe5187ac752e6278a92cbe3a78f9f9aa084eb19336c23373d8c32cd.scope - libcontainer container 7b7db402efe5187ac752e6278a92cbe3a78f9f9aa084eb19336c23373d8c32cd.
Dec 13 13:31:59.044361 systemd[1]: Started cri-containerd-a8fe3ff35a01cdebebacf8d7358afb1ebc3840e0820eb688465cd5733cf69867.scope - libcontainer container a8fe3ff35a01cdebebacf8d7358afb1ebc3840e0820eb688465cd5733cf69867.
Dec 13 13:31:59.123647 containerd[1709]: time="2024-12-13T13:31:59.123490148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-6a956dd616,Uid:74ab291aa314600021f302074159456e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b7db402efe5187ac752e6278a92cbe3a78f9f9aa084eb19336c23373d8c32cd\""
Dec 13 13:31:59.126103 containerd[1709]: time="2024-12-13T13:31:59.125803808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-6a956dd616,Uid:50235dfc4ae625fd74146c054a605b06,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8fe3ff35a01cdebebacf8d7358afb1ebc3840e0820eb688465cd5733cf69867\""
Dec 13 13:31:59.127845 kubelet[3031]: E1213 13:31:59.127231 3031 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.33:6443: connect: connection refused
Dec 13 13:31:59.131331 containerd[1709]: time="2024-12-13T13:31:59.130946644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-6a956dd616,Uid:7d2bed3941884b44f4e52176dcd76efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"777a3c9270fcf16e3e82d2f6b58c36b6e24faffd0fd306b90f56ddb764c08d76\""
Dec 13 13:31:59.137064 containerd[1709]: time="2024-12-13T13:31:59.137033604Z" level=info msg="CreateContainer within sandbox \"a8fe3ff35a01cdebebacf8d7358afb1ebc3840e0820eb688465cd5733cf69867\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 13:31:59.137377 containerd[1709]: time="2024-12-13T13:31:59.137266110Z" level=info msg="CreateContainer within sandbox \"7b7db402efe5187ac752e6278a92cbe3a78f9f9aa084eb19336c23373d8c32cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 13:31:59.137918 containerd[1709]: time="2024-12-13T13:31:59.137895126Z" level=info msg="CreateContainer within sandbox \"777a3c9270fcf16e3e82d2f6b58c36b6e24faffd0fd306b90f56ddb764c08d76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 13:31:59.220671 containerd[1709]: time="2024-12-13T13:31:59.220634002Z" level=info msg="CreateContainer within sandbox \"7b7db402efe5187ac752e6278a92cbe3a78f9f9aa084eb19336c23373d8c32cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56dd967f063e7f747e8750369a9d283ddadcd419e8c7b083e897481f3992b90d\""
Dec 13 13:31:59.221114 containerd[1709]: time="2024-12-13T13:31:59.221085114Z" level=info msg="StartContainer for \"56dd967f063e7f747e8750369a9d283ddadcd419e8c7b083e897481f3992b90d\""
Dec 13 13:31:59.233966 containerd[1709]: time="2024-12-13T13:31:59.233937752Z" level=info msg="CreateContainer within sandbox \"a8fe3ff35a01cdebebacf8d7358afb1ebc3840e0820eb688465cd5733cf69867\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e413e71c650768e70b56fe2a0894b477870ec923413386ebce82699b8f403cc\""
Dec 13 13:31:59.234911 containerd[1709]: time="2024-12-13T13:31:59.234884677Z" level=info msg="StartContainer for \"3e413e71c650768e70b56fe2a0894b477870ec923413386ebce82699b8f403cc\""
Dec 13 13:31:59.239164 containerd[1709]: time="2024-12-13T13:31:59.239026986Z" level=info msg="CreateContainer within sandbox \"777a3c9270fcf16e3e82d2f6b58c36b6e24faffd0fd306b90f56ddb764c08d76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30cb3c508fb94549a9709f007ae912c09114ffda432478b66b5a6b53a3b0af66\""
Dec 13 13:31:59.239837 containerd[1709]: time="2024-12-13T13:31:59.239438496Z" level=info msg="StartContainer for \"30cb3c508fb94549a9709f007ae912c09114ffda432478b66b5a6b53a3b0af66\""
Dec 13 13:31:59.254935 systemd[1]: Started cri-containerd-56dd967f063e7f747e8750369a9d283ddadcd419e8c7b083e897481f3992b90d.scope - libcontainer container 56dd967f063e7f747e8750369a9d283ddadcd419e8c7b083e897481f3992b90d.
Dec 13 13:31:59.291968 systemd[1]: Started cri-containerd-3e413e71c650768e70b56fe2a0894b477870ec923413386ebce82699b8f403cc.scope - libcontainer container 3e413e71c650768e70b56fe2a0894b477870ec923413386ebce82699b8f403cc.
Dec 13 13:31:59.301038 systemd[1]: Started cri-containerd-30cb3c508fb94549a9709f007ae912c09114ffda432478b66b5a6b53a3b0af66.scope - libcontainer container 30cb3c508fb94549a9709f007ae912c09114ffda432478b66b5a6b53a3b0af66.
Dec 13 13:31:59.321391 containerd[1709]: time="2024-12-13T13:31:59.321350950Z" level=info msg="StartContainer for \"56dd967f063e7f747e8750369a9d283ddadcd419e8c7b083e897481f3992b90d\" returns successfully"
Dec 13 13:31:59.395770 containerd[1709]: time="2024-12-13T13:31:59.395690105Z" level=info msg="StartContainer for \"3e413e71c650768e70b56fe2a0894b477870ec923413386ebce82699b8f403cc\" returns successfully"
Dec 13 13:31:59.407958 containerd[1709]: time="2024-12-13T13:31:59.407690421Z" level=info msg="StartContainer for \"30cb3c508fb94549a9709f007ae912c09114ffda432478b66b5a6b53a3b0af66\" returns successfully"
Dec 13 13:32:00.206066 kubelet[3031]: I1213 13:32:00.205587 3031 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:01.545229 kubelet[3031]: E1213 13:32:01.545167 3031 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.0.0-a-6a956dd616\" not found" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:01.645107 kubelet[3031]: I1213 13:32:01.645069 3031 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:01.645881 kubelet[3031]: E1213 13:32:01.645461 3031 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.0.0-a-6a956dd616.1810bfc6cb6fa706 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-6a956dd616,UID:ci-4186.0.0-a-6a956dd616,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-6a956dd616,},FirstTimestamp:2024-12-13 13:31:57.075162886 +0000 UTC m=+0.668054768,LastTimestamp:2024-12-13 13:31:57.075162886 +0000 UTC m=+0.668054768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-6a956dd616,}"
Dec 13 13:32:01.704953 kubelet[3031]: E1213 13:32:01.704904 3031 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-6a956dd616\" not found"
Dec 13 13:32:01.802599 kubelet[3031]: E1213 13:32:01.802378 3031 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.0.0-a-6a956dd616.1810bfc6cd894660 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-6a956dd616,UID:ci-4186.0.0-a-6a956dd616,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-6a956dd616,},FirstTimestamp:2024-12-13 13:31:57.110396512 +0000 UTC m=+0.703288394,LastTimestamp:2024-12-13 13:31:57.110396512 +0000 UTC m=+0.703288394,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-6a956dd616,}"
Dec 13 13:32:01.890830 kubelet[3031]: E1213 13:32:01.890676 3031 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.0.0-a-6a956dd616.1810bfc6cffbc580 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-6a956dd616,UID:ci-4186.0.0-a-6a956dd616,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4186.0.0-a-6a956dd616 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-6a956dd616,},FirstTimestamp:2024-12-13 13:31:57.151454592 +0000 UTC m=+0.744346574,LastTimestamp:2024-12-13 13:31:57.151454592 +0000 UTC m=+0.744346574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-6a956dd616,}"
Dec 13 13:32:02.073658 kubelet[3031]: I1213 13:32:02.073389 3031 apiserver.go:52] "Watching apiserver"
Dec 13 13:32:02.091932 kubelet[3031]: I1213 13:32:02.091896 3031 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 13:32:02.189355 kubelet[3031]: E1213 13:32:02.189306 3031 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:02.189942 kubelet[3031]: E1213 13:32:02.189911 3031 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.0.0-a-6a956dd616\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:04.978260 systemd[1]: Reloading requested from client PID 3306 ('systemctl') (unit session-9.scope)...
Dec 13 13:32:04.978276 systemd[1]: Reloading...
Dec 13 13:32:05.058781 zram_generator::config[3343]: No configuration found.
Dec 13 13:32:05.199928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:32:05.299805 systemd[1]: Reloading finished in 321 ms.
Dec 13 13:32:05.341342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:32:05.349134 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:32:05.349381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:32:05.359045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:32:05.615989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:32:05.625062 (kubelet)[3413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:32:05.665197 kubelet[3413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:32:05.665197 kubelet[3413]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:32:05.665197 kubelet[3413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:32:05.665658 kubelet[3413]: I1213 13:32:05.665278 3413 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:32:05.670314 kubelet[3413]: I1213 13:32:05.670274 3413 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 13:32:05.670314 kubelet[3413]: I1213 13:32:05.670301 3413 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:32:05.670622 kubelet[3413]: I1213 13:32:05.670599 3413 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 13:32:05.672250 kubelet[3413]: I1213 13:32:05.672192 3413 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 13:32:05.675858 kubelet[3413]: I1213 13:32:05.675016 3413 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:32:05.682779 kubelet[3413]: I1213 13:32:05.682761 3413 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:32:05.683135 kubelet[3413]: I1213 13:32:05.683108 3413 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:32:05.683340 kubelet[3413]: I1213 13:32:05.683197 3413 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.0.0-a-6a956dd616","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:32:05.683461 kubelet[3413]: I1213 13:32:05.683453 3413 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:32:05.683505 kubelet[3413]: I1213 13:32:05.683501 3413 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:32:05.683583 kubelet[3413]: I1213 13:32:05.683577 3413 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:32:05.683719 kubelet[3413]: I1213 13:32:05.683708 3413 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 13:32:05.683836 kubelet[3413]: I1213 13:32:05.683828 3413 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:32:05.683910 kubelet[3413]: I1213 13:32:05.683902 3413 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:32:05.683977 kubelet[3413]: I1213 13:32:05.683969 3413 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:32:05.687739 kubelet[3413]: I1213 13:32:05.687722 3413 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:32:05.687999 kubelet[3413]: I1213 13:32:05.687986 3413 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:32:05.688471 kubelet[3413]: I1213 13:32:05.688455 3413 server.go:1264] "Started kubelet"
Dec 13 13:32:05.695769 kubelet[3413]: I1213 13:32:05.693469 3413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:32:05.695769 kubelet[3413]: I1213 13:32:05.693781 3413 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:32:05.695769 kubelet[3413]: I1213 13:32:05.693818 3413 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:32:05.695769 kubelet[3413]: I1213 13:32:05.694885 3413 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 13:32:05.696991 kubelet[3413]: I1213 13:32:05.696173 3413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:32:05.696991 kubelet[3413]: I1213 13:32:05.696319 3413 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:32:05.699788 kubelet[3413]: I1213 13:32:05.699403 3413 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 13:32:05.699788 kubelet[3413]: I1213 13:32:05.699547 3413 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:32:05.707861 kubelet[3413]: E1213 13:32:05.707816 3413 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:32:05.712650 kubelet[3413]: I1213 13:32:05.712604 3413 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:32:05.713803 kubelet[3413]: I1213 13:32:05.712702 3413 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:32:05.717212 kubelet[3413]: I1213 13:32:05.716814 3413 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:32:05.717403 kubelet[3413]: I1213 13:32:05.717384 3413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:32:05.725003 kubelet[3413]: I1213 13:32:05.724975 3413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:32:05.725077 kubelet[3413]: I1213 13:32:05.725033 3413 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:32:05.725077 kubelet[3413]: I1213 13:32:05.725061 3413 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 13:32:05.725154 kubelet[3413]: E1213 13:32:05.725104 3413 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:32:05.763873 kubelet[3413]: I1213 13:32:05.763651 3413 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:32:05.763873 kubelet[3413]: I1213 13:32:05.763665 3413 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:32:05.763873 kubelet[3413]: I1213 13:32:05.763682 3413 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:32:05.763873 kubelet[3413]: I1213 13:32:05.763814 3413 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 13:32:05.763873 kubelet[3413]: I1213 13:32:05.763823 3413 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 13:32:05.763873 kubelet[3413]: I1213 13:32:05.763837 3413 policy_none.go:49] "None policy: Start"
Dec 13 13:32:05.764608 kubelet[3413]: I1213 13:32:05.764593 3413 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:32:05.765400 kubelet[3413]: I1213 13:32:05.764751 3413 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:32:05.765400 kubelet[3413]: I1213 13:32:05.764875 3413 state_mem.go:75] "Updated machine memory state"
Dec 13 13:32:05.769158 kubelet[3413]: I1213 13:32:05.769129 3413 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:32:05.770229 kubelet[3413]: I1213 13:32:05.770172 3413 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 13:32:05.770445 kubelet[3413]: I1213 13:32:05.770434 3413 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:32:05.810720 kubelet[3413]: I1213 13:32:05.810700 3413 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:05.821325 kubelet[3413]: I1213 13:32:05.821307 3413 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:05.821450 kubelet[3413]: I1213 13:32:05.821378 3413 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:05.826766 kubelet[3413]: I1213 13:32:05.825813 3413 topology_manager.go:215] "Topology Admit Handler" podUID="74ab291aa314600021f302074159456e" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:05.826766 kubelet[3413]: I1213 13:32:05.825899 3413 topology_manager.go:215] "Topology Admit Handler" podUID="7d2bed3941884b44f4e52176dcd76efd" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:05.826766 kubelet[3413]: I1213 13:32:05.825989 3413 topology_manager.go:215] "Topology Admit Handler" podUID="50235dfc4ae625fd74146c054a605b06" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:05.836246 kubelet[3413]: W1213 13:32:05.836225 3413 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:32:05.837420 kubelet[3413]: W1213 13:32:05.837404 3413 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:32:05.841143 kubelet[3413]: W1213 13:32:05.841006 3413 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:32:06.002427 kubelet[3413]: I1213 13:32:06.000516 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d2bed3941884b44f4e52176dcd76efd-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" (UID: \"7d2bed3941884b44f4e52176dcd76efd\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:06.002427 kubelet[3413]: I1213 13:32:06.000560 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d2bed3941884b44f4e52176dcd76efd-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" (UID: \"7d2bed3941884b44f4e52176dcd76efd\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:06.002427 kubelet[3413]: I1213 13:32:06.000589 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:06.002427 kubelet[3413]: I1213 13:32:06.000621 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616"
Dec 13 13:32:06.002427 kubelet[3413]: I1213 13:32:06.000653 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74ab291aa314600021f302074159456e-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-6a956dd616\" (UID: \"74ab291aa314600021f302074159456e\") " 
pod="kube-system/kube-scheduler-ci-4186.0.0-a-6a956dd616" Dec 13 13:32:06.002939 kubelet[3413]: I1213 13:32:06.000733 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d2bed3941884b44f4e52176dcd76efd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-6a956dd616\" (UID: \"7d2bed3941884b44f4e52176dcd76efd\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616" Dec 13 13:32:06.002939 kubelet[3413]: I1213 13:32:06.002821 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616" Dec 13 13:32:06.002939 kubelet[3413]: I1213 13:32:06.002860 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616" Dec 13 13:32:06.002939 kubelet[3413]: I1213 13:32:06.002885 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50235dfc4ae625fd74146c054a605b06-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-6a956dd616\" (UID: \"50235dfc4ae625fd74146c054a605b06\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616" Dec 13 13:32:06.684569 kubelet[3413]: I1213 13:32:06.684515 3413 apiserver.go:52] "Watching apiserver" Dec 13 13:32:06.700569 kubelet[3413]: I1213 13:32:06.700525 3413 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:32:06.761998 kubelet[3413]: I1213 13:32:06.761713 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.0.0-a-6a956dd616" podStartSLOduration=1.7616973009999999 podStartE2EDuration="1.761697301s" podCreationTimestamp="2024-12-13 13:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:32:06.753302181 +0000 UTC m=+1.124439259" watchObservedRunningTime="2024-12-13 13:32:06.761697301 +0000 UTC m=+1.132834379" Dec 13 13:32:06.761998 kubelet[3413]: I1213 13:32:06.761830 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.0.0-a-6a956dd616" podStartSLOduration=1.7618243040000001 podStartE2EDuration="1.761824304s" podCreationTimestamp="2024-12-13 13:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:32:06.761150386 +0000 UTC m=+1.132287464" watchObservedRunningTime="2024-12-13 13:32:06.761824304 +0000 UTC m=+1.132961482" Dec 13 13:32:06.782909 kubelet[3413]: I1213 13:32:06.782859 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.0.0-a-6a956dd616" podStartSLOduration=1.782843154 podStartE2EDuration="1.782843154s" podCreationTimestamp="2024-12-13 13:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:32:06.772055672 +0000 UTC m=+1.143192850" watchObservedRunningTime="2024-12-13 13:32:06.782843154 +0000 UTC m=+1.153980332" Dec 13 13:32:14.172350 sudo[2409]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:14.291221 sshd[2408]: Connection closed by 10.200.16.10 
port 53232 Dec 13 13:32:14.291884 sshd-session[2406]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:14.297113 systemd[1]: sshd@6-10.200.8.33:22-10.200.16.10:53232.service: Deactivated successfully. Dec 13 13:32:14.299525 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:32:14.299741 systemd[1]: session-9.scope: Consumed 4.418s CPU time, 185.4M memory peak, 0B memory swap peak. Dec 13 13:32:14.300456 systemd-logind[1691]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:32:14.301514 systemd-logind[1691]: Removed session 9. Dec 13 13:32:21.541292 kubelet[3413]: I1213 13:32:21.541234 3413 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:32:21.543882 containerd[1709]: time="2024-12-13T13:32:21.543816915Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:32:21.544272 kubelet[3413]: I1213 13:32:21.544151 3413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:32:21.982814 kubelet[3413]: I1213 13:32:21.980483 3413 topology_manager.go:215] "Topology Admit Handler" podUID="743c7ac6-79c6-415b-bf59-874b3050c46a" podNamespace="kube-system" podName="kube-proxy-96lz6" Dec 13 13:32:21.992634 systemd[1]: Created slice kubepods-besteffort-pod743c7ac6_79c6_415b_bf59_874b3050c46a.slice - libcontainer container kubepods-besteffort-pod743c7ac6_79c6_415b_bf59_874b3050c46a.slice. 
Dec 13 13:32:22.006537 kubelet[3413]: I1213 13:32:22.006493 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/743c7ac6-79c6-415b-bf59-874b3050c46a-lib-modules\") pod \"kube-proxy-96lz6\" (UID: \"743c7ac6-79c6-415b-bf59-874b3050c46a\") " pod="kube-system/kube-proxy-96lz6" Dec 13 13:32:22.006825 kubelet[3413]: I1213 13:32:22.006548 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/743c7ac6-79c6-415b-bf59-874b3050c46a-kube-proxy\") pod \"kube-proxy-96lz6\" (UID: \"743c7ac6-79c6-415b-bf59-874b3050c46a\") " pod="kube-system/kube-proxy-96lz6" Dec 13 13:32:22.006825 kubelet[3413]: I1213 13:32:22.006574 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/743c7ac6-79c6-415b-bf59-874b3050c46a-xtables-lock\") pod \"kube-proxy-96lz6\" (UID: \"743c7ac6-79c6-415b-bf59-874b3050c46a\") " pod="kube-system/kube-proxy-96lz6" Dec 13 13:32:22.006825 kubelet[3413]: I1213 13:32:22.006595 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsv5x\" (UniqueName: \"kubernetes.io/projected/743c7ac6-79c6-415b-bf59-874b3050c46a-kube-api-access-rsv5x\") pod \"kube-proxy-96lz6\" (UID: \"743c7ac6-79c6-415b-bf59-874b3050c46a\") " pod="kube-system/kube-proxy-96lz6" Dec 13 13:32:22.115736 kubelet[3413]: E1213 13:32:22.115320 3413 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:32:22.115736 kubelet[3413]: E1213 13:32:22.115388 3413 projected.go:200] Error preparing data for projected volume kube-api-access-rsv5x for pod kube-system/kube-proxy-96lz6: configmap "kube-root-ca.crt" not found Dec 13 13:32:22.115736 kubelet[3413]: E1213 13:32:22.115481 3413 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/743c7ac6-79c6-415b-bf59-874b3050c46a-kube-api-access-rsv5x podName:743c7ac6-79c6-415b-bf59-874b3050c46a nodeName:}" failed. No retries permitted until 2024-12-13 13:32:22.615452766 +0000 UTC m=+16.986589844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rsv5x" (UniqueName: "kubernetes.io/projected/743c7ac6-79c6-415b-bf59-874b3050c46a-kube-api-access-rsv5x") pod "kube-proxy-96lz6" (UID: "743c7ac6-79c6-415b-bf59-874b3050c46a") : configmap "kube-root-ca.crt" not found Dec 13 13:32:22.606055 kubelet[3413]: I1213 13:32:22.605301 3413 topology_manager.go:215] "Topology Admit Handler" podUID="be051a68-a938-4928-8b59-292b553fd635" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-fzzcx" Dec 13 13:32:22.619058 systemd[1]: Created slice kubepods-besteffort-podbe051a68_a938_4928_8b59_292b553fd635.slice - libcontainer container kubepods-besteffort-podbe051a68_a938_4928_8b59_292b553fd635.slice. 
Dec 13 13:32:22.710555 kubelet[3413]: I1213 13:32:22.710398 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be051a68-a938-4928-8b59-292b553fd635-var-lib-calico\") pod \"tigera-operator-7bc55997bb-fzzcx\" (UID: \"be051a68-a938-4928-8b59-292b553fd635\") " pod="tigera-operator/tigera-operator-7bc55997bb-fzzcx" Dec 13 13:32:22.710555 kubelet[3413]: I1213 13:32:22.710442 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-274lz\" (UniqueName: \"kubernetes.io/projected/be051a68-a938-4928-8b59-292b553fd635-kube-api-access-274lz\") pod \"tigera-operator-7bc55997bb-fzzcx\" (UID: \"be051a68-a938-4928-8b59-292b553fd635\") " pod="tigera-operator/tigera-operator-7bc55997bb-fzzcx" Dec 13 13:32:22.906763 containerd[1709]: time="2024-12-13T13:32:22.906318565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96lz6,Uid:743c7ac6-79c6-415b-bf59-874b3050c46a,Namespace:kube-system,Attempt:0,}" Dec 13 13:32:22.923036 containerd[1709]: time="2024-12-13T13:32:22.922996710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fzzcx,Uid:be051a68-a938-4928-8b59-292b553fd635,Namespace:tigera-operator,Attempt:0,}" Dec 13 13:32:23.259444 containerd[1709]: time="2024-12-13T13:32:23.259163978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:32:23.259444 containerd[1709]: time="2024-12-13T13:32:23.259214280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:32:23.259444 containerd[1709]: time="2024-12-13T13:32:23.259229180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:23.259444 containerd[1709]: time="2024-12-13T13:32:23.259296082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:23.287889 systemd[1]: Started cri-containerd-a21ec25aa29d67faa4a25a5d94a1fcbe61b502a6a075e0ac88be55e7bcb66349.scope - libcontainer container a21ec25aa29d67faa4a25a5d94a1fcbe61b502a6a075e0ac88be55e7bcb66349. Dec 13 13:32:23.311733 containerd[1709]: time="2024-12-13T13:32:23.311687580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96lz6,Uid:743c7ac6-79c6-415b-bf59-874b3050c46a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a21ec25aa29d67faa4a25a5d94a1fcbe61b502a6a075e0ac88be55e7bcb66349\"" Dec 13 13:32:23.315349 containerd[1709]: time="2024-12-13T13:32:23.315181373Z" level=info msg="CreateContainer within sandbox \"a21ec25aa29d67faa4a25a5d94a1fcbe61b502a6a075e0ac88be55e7bcb66349\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:32:23.408550 containerd[1709]: time="2024-12-13T13:32:23.407735942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:32:23.408873 containerd[1709]: time="2024-12-13T13:32:23.408800570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:32:23.408873 containerd[1709]: time="2024-12-13T13:32:23.408833471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:23.409110 containerd[1709]: time="2024-12-13T13:32:23.409066277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:23.426909 systemd[1]: Started cri-containerd-25a5be968014f7eb04f3419773ddfb3e13e096573b9b121824ce74f85edcc5f1.scope - libcontainer container 25a5be968014f7eb04f3419773ddfb3e13e096573b9b121824ce74f85edcc5f1. Dec 13 13:32:23.464274 containerd[1709]: time="2024-12-13T13:32:23.464243249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fzzcx,Uid:be051a68-a938-4928-8b59-292b553fd635,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"25a5be968014f7eb04f3419773ddfb3e13e096573b9b121824ce74f85edcc5f1\"" Dec 13 13:32:23.466519 containerd[1709]: time="2024-12-13T13:32:23.466359006Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 13:32:23.720584 systemd[1]: run-containerd-runc-k8s.io-a21ec25aa29d67faa4a25a5d94a1fcbe61b502a6a075e0ac88be55e7bcb66349-runc.PkX123.mount: Deactivated successfully. Dec 13 13:32:23.737025 containerd[1709]: time="2024-12-13T13:32:23.736923024Z" level=info msg="CreateContainer within sandbox \"a21ec25aa29d67faa4a25a5d94a1fcbe61b502a6a075e0ac88be55e7bcb66349\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19cf8c053201341b5d664f785af29d88ea9c924fec0ec6e09aee57701d1f1ffd\"" Dec 13 13:32:23.737664 containerd[1709]: time="2024-12-13T13:32:23.737630743Z" level=info msg="StartContainer for \"19cf8c053201341b5d664f785af29d88ea9c924fec0ec6e09aee57701d1f1ffd\"" Dec 13 13:32:23.771909 systemd[1]: Started cri-containerd-19cf8c053201341b5d664f785af29d88ea9c924fec0ec6e09aee57701d1f1ffd.scope - libcontainer container 19cf8c053201341b5d664f785af29d88ea9c924fec0ec6e09aee57701d1f1ffd. 
Dec 13 13:32:23.825952 containerd[1709]: time="2024-12-13T13:32:23.825896098Z" level=info msg="StartContainer for \"19cf8c053201341b5d664f785af29d88ea9c924fec0ec6e09aee57701d1f1ffd\" returns successfully" Dec 13 13:32:25.744299 kubelet[3413]: I1213 13:32:25.743711 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-96lz6" podStartSLOduration=4.743688462 podStartE2EDuration="4.743688462s" podCreationTimestamp="2024-12-13 13:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:32:24.801759832 +0000 UTC m=+19.172897010" watchObservedRunningTime="2024-12-13 13:32:25.743688462 +0000 UTC m=+20.114825540" Dec 13 13:32:27.046493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874884332.mount: Deactivated successfully. Dec 13 13:32:28.191066 containerd[1709]: time="2024-12-13T13:32:28.191003659Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:28.197706 containerd[1709]: time="2024-12-13T13:32:28.197588029Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763681" Dec 13 13:32:28.248133 containerd[1709]: time="2024-12-13T13:32:28.247981128Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:28.267439 containerd[1709]: time="2024-12-13T13:32:28.267293926Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:28.269324 containerd[1709]: time="2024-12-13T13:32:28.269054272Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id 
\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.802657764s" Dec 13 13:32:28.269324 containerd[1709]: time="2024-12-13T13:32:28.269094873Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 13:32:28.272866 containerd[1709]: time="2024-12-13T13:32:28.272666165Z" level=info msg="CreateContainer within sandbox \"25a5be968014f7eb04f3419773ddfb3e13e096573b9b121824ce74f85edcc5f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 13:32:28.542964 containerd[1709]: time="2024-12-13T13:32:28.542895532Z" level=info msg="CreateContainer within sandbox \"25a5be968014f7eb04f3419773ddfb3e13e096573b9b121824ce74f85edcc5f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"43a3b07a483e81f1347f635e8473934a24145758a3a73310f4f2b3a266862ee2\"" Dec 13 13:32:28.544367 containerd[1709]: time="2024-12-13T13:32:28.543627851Z" level=info msg="StartContainer for \"43a3b07a483e81f1347f635e8473934a24145758a3a73310f4f2b3a266862ee2\"" Dec 13 13:32:28.581965 systemd[1]: Started cri-containerd-43a3b07a483e81f1347f635e8473934a24145758a3a73310f4f2b3a266862ee2.scope - libcontainer container 43a3b07a483e81f1347f635e8473934a24145758a3a73310f4f2b3a266862ee2. 
Dec 13 13:32:28.615995 containerd[1709]: time="2024-12-13T13:32:28.615917515Z" level=info msg="StartContainer for \"43a3b07a483e81f1347f635e8473934a24145758a3a73310f4f2b3a266862ee2\" returns successfully" Dec 13 13:32:28.816302 kubelet[3413]: I1213 13:32:28.815353 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-fzzcx" podStartSLOduration=2.010497136 podStartE2EDuration="6.815330057s" podCreationTimestamp="2024-12-13 13:32:22 +0000 UTC" firstStartedPulling="2024-12-13 13:32:23.465729289 +0000 UTC m=+17.836866467" lastFinishedPulling="2024-12-13 13:32:28.27056231 +0000 UTC m=+22.641699388" observedRunningTime="2024-12-13 13:32:28.815306956 +0000 UTC m=+23.186444134" watchObservedRunningTime="2024-12-13 13:32:28.815330057 +0000 UTC m=+23.186467135" Dec 13 13:32:31.740518 kubelet[3413]: I1213 13:32:31.740455 3413 topology_manager.go:215] "Topology Admit Handler" podUID="2a3855dc-76af-408f-adae-de792e194c23" podNamespace="calico-system" podName="calico-typha-5df4c8bf5d-gpb4p" Dec 13 13:32:31.750246 systemd[1]: Created slice kubepods-besteffort-pod2a3855dc_76af_408f_adae_de792e194c23.slice - libcontainer container kubepods-besteffort-pod2a3855dc_76af_408f_adae_de792e194c23.slice. 
Dec 13 13:32:31.751359 kubelet[3413]: W1213 13:32:31.751322 3413 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:31.751359 kubelet[3413]: E1213 13:32:31.751371 3413 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:31.751544 kubelet[3413]: W1213 13:32:31.751438 3413 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:31.751544 kubelet[3413]: E1213 13:32:31.751451 3413 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:31.751544 kubelet[3413]: W1213 13:32:31.751484 3413 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found 
between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:31.751544 kubelet[3413]: E1213 13:32:31.751498 3413 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:31.762197 kubelet[3413]: I1213 13:32:31.762164 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a3855dc-76af-408f-adae-de792e194c23-typha-certs\") pod \"calico-typha-5df4c8bf5d-gpb4p\" (UID: \"2a3855dc-76af-408f-adae-de792e194c23\") " pod="calico-system/calico-typha-5df4c8bf5d-gpb4p" Dec 13 13:32:31.762345 kubelet[3413]: I1213 13:32:31.762262 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3855dc-76af-408f-adae-de792e194c23-tigera-ca-bundle\") pod \"calico-typha-5df4c8bf5d-gpb4p\" (UID: \"2a3855dc-76af-408f-adae-de792e194c23\") " pod="calico-system/calico-typha-5df4c8bf5d-gpb4p" Dec 13 13:32:31.762448 kubelet[3413]: I1213 13:32:31.762298 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4ql6\" (UniqueName: \"kubernetes.io/projected/2a3855dc-76af-408f-adae-de792e194c23-kube-api-access-w4ql6\") pod \"calico-typha-5df4c8bf5d-gpb4p\" (UID: \"2a3855dc-76af-408f-adae-de792e194c23\") " pod="calico-system/calico-typha-5df4c8bf5d-gpb4p" Dec 13 13:32:31.849514 kubelet[3413]: I1213 13:32:31.849447 3413 topology_manager.go:215] "Topology Admit Handler" podUID="e691ed20-2d77-4100-92aa-e4caa4f55b8f" podNamespace="calico-system" podName="calico-node-f9x9t" Dec 13 13:32:31.864083 systemd[1]: Created 
slice kubepods-besteffort-pode691ed20_2d77_4100_92aa_e4caa4f55b8f.slice - libcontainer container kubepods-besteffort-pode691ed20_2d77_4100_92aa_e4caa4f55b8f.slice. Dec 13 13:32:31.964127 kubelet[3413]: I1213 13:32:31.964068 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-cni-net-dir\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964127 kubelet[3413]: I1213 13:32:31.964133 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s97l\" (UniqueName: \"kubernetes.io/projected/e691ed20-2d77-4100-92aa-e4caa4f55b8f-kube-api-access-8s97l\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964383 kubelet[3413]: I1213 13:32:31.964164 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-flexvol-driver-host\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964383 kubelet[3413]: I1213 13:32:31.964199 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-lib-modules\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964383 kubelet[3413]: I1213 13:32:31.964223 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-cni-log-dir\") pod 
\"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964383 kubelet[3413]: I1213 13:32:31.964256 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-xtables-lock\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964383 kubelet[3413]: I1213 13:32:31.964290 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e691ed20-2d77-4100-92aa-e4caa4f55b8f-tigera-ca-bundle\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964589 kubelet[3413]: I1213 13:32:31.964313 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-var-run-calico\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964589 kubelet[3413]: I1213 13:32:31.964334 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-var-lib-calico\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964589 kubelet[3413]: I1213 13:32:31.964387 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-policysync\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " 
pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964589 kubelet[3413]: I1213 13:32:31.964418 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e691ed20-2d77-4100-92aa-e4caa4f55b8f-cni-bin-dir\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:31.964589 kubelet[3413]: I1213 13:32:31.964443 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e691ed20-2d77-4100-92aa-e4caa4f55b8f-node-certs\") pod \"calico-node-f9x9t\" (UID: \"e691ed20-2d77-4100-92aa-e4caa4f55b8f\") " pod="calico-system/calico-node-f9x9t" Dec 13 13:32:32.034855 kubelet[3413]: I1213 13:32:32.033453 3413 topology_manager.go:215] "Topology Admit Handler" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" podNamespace="calico-system" podName="csi-node-driver-l7zsr" Dec 13 13:32:32.034855 kubelet[3413]: E1213 13:32:32.033849 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:32.064875 kubelet[3413]: I1213 13:32:32.064631 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e38b74a-209a-4cd3-be7c-117000f59938-socket-dir\") pod \"csi-node-driver-l7zsr\" (UID: \"5e38b74a-209a-4cd3-be7c-117000f59938\") " pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:32:32.066471 kubelet[3413]: I1213 13:32:32.066297 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklrk\" (UniqueName: 
\"kubernetes.io/projected/5e38b74a-209a-4cd3-be7c-117000f59938-kube-api-access-pklrk\") pod \"csi-node-driver-l7zsr\" (UID: \"5e38b74a-209a-4cd3-be7c-117000f59938\") " pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:32:32.066471 kubelet[3413]: I1213 13:32:32.066421 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e38b74a-209a-4cd3-be7c-117000f59938-registration-dir\") pod \"csi-node-driver-l7zsr\" (UID: \"5e38b74a-209a-4cd3-be7c-117000f59938\") " pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:32:32.066650 kubelet[3413]: I1213 13:32:32.066506 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e38b74a-209a-4cd3-be7c-117000f59938-varrun\") pod \"csi-node-driver-l7zsr\" (UID: \"5e38b74a-209a-4cd3-be7c-117000f59938\") " pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:32:32.066650 kubelet[3413]: I1213 13:32:32.066592 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e38b74a-209a-4cd3-be7c-117000f59938-kubelet-dir\") pod \"csi-node-driver-l7zsr\" (UID: \"5e38b74a-209a-4cd3-be7c-117000f59938\") " pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:32:32.070543 kubelet[3413]: E1213 13:32:32.070513 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.070727 kubelet[3413]: W1213 13:32:32.070677 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.071032 kubelet[3413]: E1213 13:32:32.071013 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.071248 kubelet[3413]: E1213 13:32:32.071167 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.071248 kubelet[3413]: W1213 13:32:32.071209 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.071248 kubelet[3413]: E1213 13:32:32.071225 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.071780 kubelet[3413]: E1213 13:32:32.071658 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.071780 kubelet[3413]: W1213 13:32:32.071690 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.071780 kubelet[3413]: E1213 13:32:32.071728 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.075360 kubelet[3413]: E1213 13:32:32.075272 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.075360 kubelet[3413]: W1213 13:32:32.075288 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.075549 kubelet[3413]: E1213 13:32:32.075393 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.078668 kubelet[3413]: E1213 13:32:32.078542 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.078668 kubelet[3413]: W1213 13:32:32.078558 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.078864 kubelet[3413]: E1213 13:32:32.078840 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.079444 kubelet[3413]: E1213 13:32:32.079429 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.079649 kubelet[3413]: W1213 13:32:32.079526 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.079824 kubelet[3413]: E1213 13:32:32.079735 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.081100 kubelet[3413]: E1213 13:32:32.080085 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.081100 kubelet[3413]: W1213 13:32:32.080099 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.081100 kubelet[3413]: E1213 13:32:32.080314 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.081100 kubelet[3413]: W1213 13:32:32.080326 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.081100 kubelet[3413]: E1213 13:32:32.080340 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.081100 kubelet[3413]: E1213 13:32:32.080861 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.081100 kubelet[3413]: E1213 13:32:32.080923 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.081100 kubelet[3413]: W1213 13:32:32.080933 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.081100 kubelet[3413]: E1213 13:32:32.080944 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.167835 kubelet[3413]: E1213 13:32:32.167556 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.167835 kubelet[3413]: W1213 13:32:32.167587 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.167835 kubelet[3413]: E1213 13:32:32.167616 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.168415 kubelet[3413]: E1213 13:32:32.168209 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.168415 kubelet[3413]: W1213 13:32:32.168248 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.168415 kubelet[3413]: E1213 13:32:32.168271 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.168829 kubelet[3413]: E1213 13:32:32.168783 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.168829 kubelet[3413]: W1213 13:32:32.168798 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.168829 kubelet[3413]: E1213 13:32:32.168813 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.169255 kubelet[3413]: E1213 13:32:32.169217 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.169255 kubelet[3413]: W1213 13:32:32.169234 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.169255 kubelet[3413]: E1213 13:32:32.169255 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.169576 kubelet[3413]: E1213 13:32:32.169507 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.169576 kubelet[3413]: W1213 13:32:32.169519 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.169576 kubelet[3413]: E1213 13:32:32.169537 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.170183 kubelet[3413]: E1213 13:32:32.169978 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.170183 kubelet[3413]: W1213 13:32:32.169993 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.170183 kubelet[3413]: E1213 13:32:32.170014 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.170612 kubelet[3413]: E1213 13:32:32.170447 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.170612 kubelet[3413]: W1213 13:32:32.170460 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.170612 kubelet[3413]: E1213 13:32:32.170490 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.170910 kubelet[3413]: E1213 13:32:32.170843 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.170910 kubelet[3413]: W1213 13:32:32.170856 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.171136 kubelet[3413]: E1213 13:32:32.171033 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.171307 kubelet[3413]: E1213 13:32:32.171249 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.171307 kubelet[3413]: W1213 13:32:32.171259 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.171512 kubelet[3413]: E1213 13:32:32.171476 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.172073 kubelet[3413]: E1213 13:32:32.172055 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.172073 kubelet[3413]: W1213 13:32:32.172071 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.172235 kubelet[3413]: E1213 13:32:32.172144 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.172432 kubelet[3413]: E1213 13:32:32.172392 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.172432 kubelet[3413]: W1213 13:32:32.172430 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.172553 kubelet[3413]: E1213 13:32:32.172537 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.172785 kubelet[3413]: E1213 13:32:32.172681 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.172785 kubelet[3413]: W1213 13:32:32.172694 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.172911 kubelet[3413]: E1213 13:32:32.172807 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.173110 kubelet[3413]: E1213 13:32:32.173009 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.173110 kubelet[3413]: W1213 13:32:32.173022 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.173189 kubelet[3413]: E1213 13:32:32.173116 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.173362 kubelet[3413]: E1213 13:32:32.173344 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.173362 kubelet[3413]: W1213 13:32:32.173359 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.173457 kubelet[3413]: E1213 13:32:32.173398 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.173724 kubelet[3413]: E1213 13:32:32.173653 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.173724 kubelet[3413]: W1213 13:32:32.173678 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.173954 kubelet[3413]: E1213 13:32:32.173917 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.174240 kubelet[3413]: E1213 13:32:32.174143 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.174240 kubelet[3413]: W1213 13:32:32.174157 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.174240 kubelet[3413]: E1213 13:32:32.174200 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.174670 kubelet[3413]: E1213 13:32:32.174555 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.174670 kubelet[3413]: W1213 13:32:32.174569 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.174670 kubelet[3413]: E1213 13:32:32.174598 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.175021 kubelet[3413]: E1213 13:32:32.174799 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.175021 kubelet[3413]: W1213 13:32:32.174809 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.175021 kubelet[3413]: E1213 13:32:32.174898 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.175668 kubelet[3413]: E1213 13:32:32.175361 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.175668 kubelet[3413]: W1213 13:32:32.175375 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.175668 kubelet[3413]: E1213 13:32:32.175468 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.175874 kubelet[3413]: E1213 13:32:32.175686 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.175874 kubelet[3413]: W1213 13:32:32.175697 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.175874 kubelet[3413]: E1213 13:32:32.175792 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.176387 kubelet[3413]: E1213 13:32:32.176219 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.176387 kubelet[3413]: W1213 13:32:32.176234 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.176387 kubelet[3413]: E1213 13:32:32.176375 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.176565 kubelet[3413]: E1213 13:32:32.176544 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.176565 kubelet[3413]: W1213 13:32:32.176564 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.176663 kubelet[3413]: E1213 13:32:32.176648 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.177112 kubelet[3413]: E1213 13:32:32.176824 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.177112 kubelet[3413]: W1213 13:32:32.176838 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.177112 kubelet[3413]: E1213 13:32:32.176923 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.177643 kubelet[3413]: E1213 13:32:32.177448 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.177643 kubelet[3413]: W1213 13:32:32.177464 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.177643 kubelet[3413]: E1213 13:32:32.177564 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.177897 kubelet[3413]: E1213 13:32:32.177778 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.177897 kubelet[3413]: W1213 13:32:32.177802 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.177989 kubelet[3413]: E1213 13:32:32.177899 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.178716 kubelet[3413]: E1213 13:32:32.178038 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.178716 kubelet[3413]: W1213 13:32:32.178050 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.178716 kubelet[3413]: E1213 13:32:32.178132 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.178716 kubelet[3413]: E1213 13:32:32.178502 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.178716 kubelet[3413]: W1213 13:32:32.178513 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.178716 kubelet[3413]: E1213 13:32:32.178621 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.179013 kubelet[3413]: E1213 13:32:32.178805 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.179013 kubelet[3413]: W1213 13:32:32.178818 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.179013 kubelet[3413]: E1213 13:32:32.178843 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.179156 kubelet[3413]: E1213 13:32:32.179049 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.179156 kubelet[3413]: W1213 13:32:32.179059 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.179156 kubelet[3413]: E1213 13:32:32.179071 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.179288 kubelet[3413]: E1213 13:32:32.179279 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.179329 kubelet[3413]: W1213 13:32:32.179288 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.179329 kubelet[3413]: E1213 13:32:32.179300 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.276021 kubelet[3413]: E1213 13:32:32.275981 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.276021 kubelet[3413]: W1213 13:32:32.276009 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.276341 kubelet[3413]: E1213 13:32:32.276036 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.276341 kubelet[3413]: E1213 13:32:32.276336 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.276440 kubelet[3413]: W1213 13:32:32.276350 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.276440 kubelet[3413]: E1213 13:32:32.276368 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.276624 kubelet[3413]: E1213 13:32:32.276607 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.276624 kubelet[3413]: W1213 13:32:32.276621 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.276732 kubelet[3413]: E1213 13:32:32.276636 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:32.276943 kubelet[3413]: E1213 13:32:32.276926 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:32.276943 kubelet[3413]: W1213 13:32:32.276939 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:32.277062 kubelet[3413]: E1213 13:32:32.276953 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:32.863833 kubelet[3413]: E1213 13:32:32.863780 3413 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 13:32:32.864056 kubelet[3413]: E1213 13:32:32.863908 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a3855dc-76af-408f-adae-de792e194c23-typha-certs podName:2a3855dc-76af-408f-adae-de792e194c23 nodeName:}" failed. No retries permitted until 2024-12-13 13:32:33.363878043 +0000 UTC m=+27.735015121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/2a3855dc-76af-408f-adae-de792e194c23-typha-certs") pod "calico-typha-5df4c8bf5d-gpb4p" (UID: "2a3855dc-76af-408f-adae-de792e194c23") : failed to sync secret cache: timed out waiting for the condition Dec 13 13:32:32.882856 kubelet[3413]: E1213 13:32:32.882800 3413 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:32:32.882856 kubelet[3413]: E1213 13:32:32.882852 3413 projected.go:200] Error preparing data for projected volume kube-api-access-w4ql6 for pod calico-system/calico-typha-5df4c8bf5d-gpb4p: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:32:32.883333 kubelet[3413]: E1213 13:32:32.882965 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a3855dc-76af-408f-adae-de792e194c23-kube-api-access-w4ql6 podName:2a3855dc-76af-408f-adae-de792e194c23 nodeName:}" failed. No retries permitted until 2024-12-13 13:32:33.382938235 +0000 UTC m=+27.754075313 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4ql6" (UniqueName: "kubernetes.io/projected/2a3855dc-76af-408f-adae-de792e194c23-kube-api-access-w4ql6") pod "calico-typha-5df4c8bf5d-gpb4p" (UID: "2a3855dc-76af-408f-adae-de792e194c23") : failed to sync configmap cache: timed out waiting for the condition Dec 13 13:32:33.070825 containerd[1709]: time="2024-12-13T13:32:33.070419769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f9x9t,Uid:e691ed20-2d77-4100-92aa-e4caa4f55b8f,Namespace:calico-system,Attempt:0,}" Dec 13 13:32:33.321129 containerd[1709]: time="2024-12-13T13:32:33.320121607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:32:33.321584 containerd[1709]: time="2024-12-13T13:32:33.321114633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:32:33.321584 containerd[1709]: time="2024-12-13T13:32:33.321135033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:33.321584 containerd[1709]: time="2024-12-13T13:32:33.321345538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:33.347952 systemd[1]: Started cri-containerd-71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f.scope - libcontainer container 71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f. Dec 13 13:32:33.376765 containerd[1709]: time="2024-12-13T13:32:33.376686565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f9x9t,Uid:e691ed20-2d77-4100-92aa-e4caa4f55b8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\"" Dec 13 13:32:33.379418 containerd[1709]: time="2024-12-13T13:32:33.379324333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:32:33.407763 kubelet[3413]: E1213 13:32:33.407731 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.407763 kubelet[3413]: W1213 13:32:33.407757 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.407869 kubelet[3413]: E1213 13:32:33.407843 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:33.408068 kubelet[3413]: E1213 13:32:33.408048 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.408068 kubelet[3413]: W1213 13:32:33.408061 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.408193 kubelet[3413]: E1213 13:32:33.408086 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:33.408309 kubelet[3413]: E1213 13:32:33.408291 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.408309 kubelet[3413]: W1213 13:32:33.408303 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.408426 kubelet[3413]: E1213 13:32:33.408319 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:33.408540 kubelet[3413]: E1213 13:32:33.408523 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.408540 kubelet[3413]: W1213 13:32:33.408535 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.408648 kubelet[3413]: E1213 13:32:33.408557 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:33.408815 kubelet[3413]: E1213 13:32:33.408798 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.408815 kubelet[3413]: W1213 13:32:33.408811 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.408930 kubelet[3413]: E1213 13:32:33.408833 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:33.409341 kubelet[3413]: E1213 13:32:33.409161 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.409341 kubelet[3413]: W1213 13:32:33.409191 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.409341 kubelet[3413]: E1213 13:32:33.409206 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:33.413824 kubelet[3413]: E1213 13:32:33.413029 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.413824 kubelet[3413]: W1213 13:32:33.413049 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.413824 kubelet[3413]: E1213 13:32:33.413079 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:32:33.416926 kubelet[3413]: E1213 13:32:33.416911 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:32:33.417051 kubelet[3413]: W1213 13:32:33.417039 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:32:33.417136 kubelet[3413]: E1213 13:32:33.417124 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:32:33.555109 containerd[1709]: time="2024-12-13T13:32:33.555050164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5df4c8bf5d-gpb4p,Uid:2a3855dc-76af-408f-adae-de792e194c23,Namespace:calico-system,Attempt:0,}" Dec 13 13:32:33.729459 kubelet[3413]: E1213 13:32:33.727013 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:33.804684 containerd[1709]: time="2024-12-13T13:32:33.804244689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:32:33.804684 containerd[1709]: time="2024-12-13T13:32:33.804332592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:32:33.804684 containerd[1709]: time="2024-12-13T13:32:33.804353592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:33.804684 containerd[1709]: time="2024-12-13T13:32:33.804594698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:32:33.828984 systemd[1]: Started cri-containerd-18975e33f0789fa423fdd6b3d9342c61e6f1b122360ae2b7bbaa275a48c132c5.scope - libcontainer container 18975e33f0789fa423fdd6b3d9342c61e6f1b122360ae2b7bbaa275a48c132c5. Dec 13 13:32:33.871689 containerd[1709]: time="2024-12-13T13:32:33.871630227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5df4c8bf5d-gpb4p,Uid:2a3855dc-76af-408f-adae-de792e194c23,Namespace:calico-system,Attempt:0,} returns sandbox id \"18975e33f0789fa423fdd6b3d9342c61e6f1b122360ae2b7bbaa275a48c132c5\"" Dec 13 13:32:35.726132 kubelet[3413]: E1213 13:32:35.725700 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:36.296152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570038961.mount: Deactivated successfully. 
Dec 13 13:32:36.906340 containerd[1709]: time="2024-12-13T13:32:36.906279555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:37.640119 containerd[1709]: time="2024-12-13T13:32:37.640023580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 13:32:37.727359 kubelet[3413]: E1213 13:32:37.725576 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:38.743062 containerd[1709]: time="2024-12-13T13:32:38.742096056Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:38.747692 containerd[1709]: time="2024-12-13T13:32:38.747642603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:38.748455 containerd[1709]: time="2024-12-13T13:32:38.748414123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 5.36870648s" Dec 13 13:32:38.748602 containerd[1709]: time="2024-12-13T13:32:38.748580728Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:32:38.749683 containerd[1709]: time="2024-12-13T13:32:38.749658056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 13:32:38.751929 containerd[1709]: time="2024-12-13T13:32:38.751897316Z" level=info msg="CreateContainer within sandbox \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:32:38.796236 containerd[1709]: time="2024-12-13T13:32:38.796181788Z" level=info msg="CreateContainer within sandbox \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf\"" Dec 13 13:32:38.797120 containerd[1709]: time="2024-12-13T13:32:38.797054211Z" level=info msg="StartContainer for \"89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf\"" Dec 13 13:32:38.836927 systemd[1]: Started cri-containerd-89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf.scope - libcontainer container 89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf. Dec 13 13:32:38.884638 containerd[1709]: time="2024-12-13T13:32:38.883531300Z" level=info msg="StartContainer for \"89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf\" returns successfully" Dec 13 13:32:38.890371 systemd[1]: cri-containerd-89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf.scope: Deactivated successfully. Dec 13 13:32:38.912686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf-rootfs.mount: Deactivated successfully. 
Dec 13 13:32:39.647688 containerd[1709]: time="2024-12-13T13:32:39.647608428Z" level=info msg="shim disconnected" id=89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf namespace=k8s.io Dec 13 13:32:39.647688 containerd[1709]: time="2024-12-13T13:32:39.647676230Z" level=warning msg="cleaning up after shim disconnected" id=89dc160e7bf503886e1bd8d2f8384d69f0175ed09e3b83a9394ce539c1a69fcf namespace=k8s.io Dec 13 13:32:39.647688 containerd[1709]: time="2024-12-13T13:32:39.647689030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:32:39.727563 kubelet[3413]: E1213 13:32:39.725851 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:41.355557 containerd[1709]: time="2024-12-13T13:32:41.355497843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:41.358611 containerd[1709]: time="2024-12-13T13:32:41.358546423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 13:32:41.362520 containerd[1709]: time="2024-12-13T13:32:41.362460827Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:41.368693 containerd[1709]: time="2024-12-13T13:32:41.368646891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:41.369441 containerd[1709]: time="2024-12-13T13:32:41.369278607Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.618994934s" Dec 13 13:32:41.369441 containerd[1709]: time="2024-12-13T13:32:41.369321108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 13:32:41.371051 containerd[1709]: time="2024-12-13T13:32:41.370595542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:32:41.386793 containerd[1709]: time="2024-12-13T13:32:41.384985523Z" level=info msg="CreateContainer within sandbox \"18975e33f0789fa423fdd6b3d9342c61e6f1b122360ae2b7bbaa275a48c132c5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 13:32:41.439670 containerd[1709]: time="2024-12-13T13:32:41.439455765Z" level=info msg="CreateContainer within sandbox \"18975e33f0789fa423fdd6b3d9342c61e6f1b122360ae2b7bbaa275a48c132c5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fc26ae910a5102019fa4c42b8c033d76c2a297683f8770de7535e6044e8216a9\"" Dec 13 13:32:41.440289 containerd[1709]: time="2024-12-13T13:32:41.440200785Z" level=info msg="StartContainer for \"fc26ae910a5102019fa4c42b8c033d76c2a297683f8770de7535e6044e8216a9\"" Dec 13 13:32:41.473919 systemd[1]: Started cri-containerd-fc26ae910a5102019fa4c42b8c033d76c2a297683f8770de7535e6044e8216a9.scope - libcontainer container fc26ae910a5102019fa4c42b8c033d76c2a297683f8770de7535e6044e8216a9. 
Dec 13 13:32:41.521194 containerd[1709]: time="2024-12-13T13:32:41.521137828Z" level=info msg="StartContainer for \"fc26ae910a5102019fa4c42b8c033d76c2a297683f8770de7535e6044e8216a9\" returns successfully" Dec 13 13:32:41.726565 kubelet[3413]: E1213 13:32:41.726171 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:42.844965 kubelet[3413]: I1213 13:32:42.844928 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:32:43.727441 kubelet[3413]: E1213 13:32:43.725949 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:45.729531 kubelet[3413]: E1213 13:32:45.727066 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:46.348258 containerd[1709]: time="2024-12-13T13:32:46.348181511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:46.393906 containerd[1709]: time="2024-12-13T13:32:46.393801713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:32:46.406452 containerd[1709]: time="2024-12-13T13:32:46.406333243Z" level=info 
msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:46.442297 containerd[1709]: time="2024-12-13T13:32:46.442180887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:46.443221 containerd[1709]: time="2024-12-13T13:32:46.443175213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.072541269s" Dec 13 13:32:46.443447 containerd[1709]: time="2024-12-13T13:32:46.443226614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:32:46.446535 containerd[1709]: time="2024-12-13T13:32:46.446273994Z" level=info msg="CreateContainer within sandbox \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:32:46.752800 containerd[1709]: time="2024-12-13T13:32:46.752631462Z" level=info msg="CreateContainer within sandbox \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60\"" Dec 13 13:32:46.755262 containerd[1709]: time="2024-12-13T13:32:46.753816793Z" level=info msg="StartContainer for \"eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60\"" Dec 13 13:32:46.788398 systemd[1]: 
run-containerd-runc-k8s.io-eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60-runc.xsqRNb.mount: Deactivated successfully. Dec 13 13:32:46.796984 systemd[1]: Started cri-containerd-eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60.scope - libcontainer container eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60. Dec 13 13:32:46.836996 containerd[1709]: time="2024-12-13T13:32:46.836867880Z" level=info msg="StartContainer for \"eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60\" returns successfully" Dec 13 13:32:46.869948 kubelet[3413]: I1213 13:32:46.869859 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5df4c8bf5d-gpb4p" podStartSLOduration=8.373764807 podStartE2EDuration="15.869833048s" podCreationTimestamp="2024-12-13 13:32:31 +0000 UTC" firstStartedPulling="2024-12-13 13:32:33.874280095 +0000 UTC m=+28.245417273" lastFinishedPulling="2024-12-13 13:32:41.370348436 +0000 UTC m=+35.741485514" observedRunningTime="2024-12-13 13:32:41.855862689 +0000 UTC m=+36.226999867" watchObservedRunningTime="2024-12-13 13:32:46.869833048 +0000 UTC m=+41.240970226" Dec 13 13:32:47.727265 kubelet[3413]: E1213 13:32:47.725494 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:48.147667 kubelet[3413]: I1213 13:32:48.147342 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:32:49.727136 kubelet[3413]: E1213 13:32:49.726935 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:51.726722 kubelet[3413]: E1213 13:32:51.726208 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:32:53.133159 systemd[1]: cri-containerd-eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60.scope: Deactivated successfully. Dec 13 13:32:53.156371 kubelet[3413]: I1213 13:32:53.156046 3413 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:32:53.159480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60-rootfs.mount: Deactivated successfully. Dec 13 13:32:53.198085 kubelet[3413]: I1213 13:32:53.196760 3413 topology_manager.go:215] "Topology Admit Handler" podUID="0c845263-e633-4055-81f9-4aa28ad32b74" podNamespace="calico-system" podName="calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:32:53.204549 kubelet[3413]: I1213 13:32:53.204504 3413 topology_manager.go:215] "Topology Admit Handler" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-88gf8" Dec 13 13:32:53.204807 kubelet[3413]: I1213 13:32:53.204688 3413 topology_manager.go:215] "Topology Admit Handler" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe" podNamespace="calico-apiserver" podName="calico-apiserver-c5f78578d-dhdkn" Dec 13 13:32:53.211780 systemd[1]: Created slice kubepods-besteffort-pod0c845263_e633_4055_81f9_4aa28ad32b74.slice - libcontainer container kubepods-besteffort-pod0c845263_e633_4055_81f9_4aa28ad32b74.slice. 
Dec 13 13:32:53.215559 kubelet[3413]: I1213 13:32:53.215252 3413 topology_manager.go:215] "Topology Admit Handler" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef" podNamespace="calico-apiserver" podName="calico-apiserver-c5f78578d-x98kx" Dec 13 13:32:53.215559 kubelet[3413]: I1213 13:32:53.215546 3413 topology_manager.go:215] "Topology Admit Handler" podUID="d48882ed-a3fb-4cc6-a051-3acab30e260b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nxst4" Dec 13 13:32:53.217961 kubelet[3413]: W1213 13:32:53.217848 3413 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:53.217961 kubelet[3413]: E1213 13:32:53.217919 3413 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:53.218851 kubelet[3413]: W1213 13:32:53.218327 3413 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:53.218851 kubelet[3413]: E1213 13:32:53.218354 3413 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API 
group "" in the namespace "kube-system": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:53.218851 kubelet[3413]: W1213 13:32:53.218418 3413 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:53.218851 kubelet[3413]: E1213 13:32:53.218432 3413 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.0.0-a-6a956dd616" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186.0.0-a-6a956dd616' and this object Dec 13 13:32:53.232907 systemd[1]: Created slice kubepods-burstable-pod3098ed4c_c400_4c97_958d_d1930afff8ed.slice - libcontainer container kubepods-burstable-pod3098ed4c_c400_4c97_958d_d1930afff8ed.slice. 
Dec 13 13:32:53.239387 kubelet[3413]: I1213 13:32:53.239181 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tkp\" (UniqueName: \"kubernetes.io/projected/0c845263-e633-4055-81f9-4aa28ad32b74-kube-api-access-69tkp\") pod \"calico-kube-controllers-5cbfd9d889-rxm9h\" (UID: \"0c845263-e633-4055-81f9-4aa28ad32b74\") " pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h"
Dec 13 13:32:53.239387 kubelet[3413]: I1213 13:32:53.239274 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c845263-e633-4055-81f9-4aa28ad32b74-tigera-ca-bundle\") pod \"calico-kube-controllers-5cbfd9d889-rxm9h\" (UID: \"0c845263-e633-4055-81f9-4aa28ad32b74\") " pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h"
Dec 13 13:32:53.241132 systemd[1]: Created slice kubepods-besteffort-pode90719d9_2bf9_4651_a5b8_e332bf6846fe.slice - libcontainer container kubepods-besteffort-pode90719d9_2bf9_4651_a5b8_e332bf6846fe.slice.
Dec 13 13:32:53.251523 systemd[1]: Created slice kubepods-burstable-podd48882ed_a3fb_4cc6_a051_3acab30e260b.slice - libcontainer container kubepods-burstable-podd48882ed_a3fb_4cc6_a051_3acab30e260b.slice.
Dec 13 13:32:53.259905 systemd[1]: Created slice kubepods-besteffort-pod4966e6b9_08e9_49c8_9ca0_aea5b2c4ccef.slice - libcontainer container kubepods-besteffort-pod4966e6b9_08e9_49c8_9ca0_aea5b2c4ccef.slice.
Dec 13 13:32:53.340438 kubelet[3413]: I1213 13:32:53.340370 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nl42\" (UniqueName: \"kubernetes.io/projected/d48882ed-a3fb-4cc6-a051-3acab30e260b-kube-api-access-9nl42\") pod \"coredns-7db6d8ff4d-nxst4\" (UID: \"d48882ed-a3fb-4cc6-a051-3acab30e260b\") " pod="kube-system/coredns-7db6d8ff4d-nxst4"
Dec 13 13:32:53.340438 kubelet[3413]: I1213 13:32:53.340435 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d48882ed-a3fb-4cc6-a051-3acab30e260b-config-volume\") pod \"coredns-7db6d8ff4d-nxst4\" (UID: \"d48882ed-a3fb-4cc6-a051-3acab30e260b\") " pod="kube-system/coredns-7db6d8ff4d-nxst4"
Dec 13 13:32:53.340718 kubelet[3413]: I1213 13:32:53.340492 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3098ed4c-c400-4c97-958d-d1930afff8ed-config-volume\") pod \"coredns-7db6d8ff4d-88gf8\" (UID: \"3098ed4c-c400-4c97-958d-d1930afff8ed\") " pod="kube-system/coredns-7db6d8ff4d-88gf8"
Dec 13 13:32:53.340718 kubelet[3413]: I1213 13:32:53.340520 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e90719d9-2bf9-4651-a5b8-e332bf6846fe-calico-apiserver-certs\") pod \"calico-apiserver-c5f78578d-dhdkn\" (UID: \"e90719d9-2bf9-4651-a5b8-e332bf6846fe\") " pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn"
Dec 13 13:32:53.340718 kubelet[3413]: I1213 13:32:53.340550 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef-calico-apiserver-certs\") pod \"calico-apiserver-c5f78578d-x98kx\" (UID: \"4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef\") " pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx"
Dec 13 13:32:53.340718 kubelet[3413]: I1213 13:32:53.340585 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k62bk\" (UniqueName: \"kubernetes.io/projected/4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef-kube-api-access-k62bk\") pod \"calico-apiserver-c5f78578d-x98kx\" (UID: \"4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef\") " pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx"
Dec 13 13:32:53.340718 kubelet[3413]: I1213 13:32:53.340637 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2m4\" (UniqueName: \"kubernetes.io/projected/3098ed4c-c400-4c97-958d-d1930afff8ed-kube-api-access-pq2m4\") pod \"coredns-7db6d8ff4d-88gf8\" (UID: \"3098ed4c-c400-4c97-958d-d1930afff8ed\") " pod="kube-system/coredns-7db6d8ff4d-88gf8"
Dec 13 13:32:53.341020 kubelet[3413]: I1213 13:32:53.340663 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4ll\" (UniqueName: \"kubernetes.io/projected/e90719d9-2bf9-4651-a5b8-e332bf6846fe-kube-api-access-bs4ll\") pod \"calico-apiserver-c5f78578d-dhdkn\" (UID: \"e90719d9-2bf9-4651-a5b8-e332bf6846fe\") " pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn"
Dec 13 13:32:53.527654 containerd[1709]: time="2024-12-13T13:32:53.527604072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:0,}"
Dec 13 13:32:53.732716 systemd[1]: Created slice kubepods-besteffort-pod5e38b74a_209a_4cd3_be7c_117000f59938.slice - libcontainer container kubepods-besteffort-pod5e38b74a_209a_4cd3_be7c_117000f59938.slice.
Dec 13 13:32:57.345273 kubelet[3413]: E1213 13:32:54.441949 3413 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Dec 13 13:32:57.345273 kubelet[3413]: E1213 13:32:54.441986 3413 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 13:32:57.345273 kubelet[3413]: E1213 13:32:54.442069 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef-calico-apiserver-certs podName:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef nodeName:}" failed. No retries permitted until 2024-12-13 13:32:54.942039387 +0000 UTC m=+49.313176465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef-calico-apiserver-certs") pod "calico-apiserver-c5f78578d-x98kx" (UID: "4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef") : failed to sync secret cache: timed out waiting for the condition
Dec 13 13:32:57.345273 kubelet[3413]: E1213 13:32:54.442091 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d48882ed-a3fb-4cc6-a051-3acab30e260b-config-volume podName:d48882ed-a3fb-4cc6-a051-3acab30e260b nodeName:}" failed. No retries permitted until 2024-12-13 13:32:54.942081288 +0000 UTC m=+49.313218366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d48882ed-a3fb-4cc6-a051-3acab30e260b-config-volume") pod "coredns-7db6d8ff4d-nxst4" (UID: "d48882ed-a3fb-4cc6-a051-3acab30e260b") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 13:32:57.345273 kubelet[3413]: E1213 13:32:54.441947 3413 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Dec 13 13:32:57.354192 containerd[1709]: time="2024-12-13T13:32:57.344054818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:0,}"
Dec 13 13:32:57.354520 kubelet[3413]: E1213 13:32:54.442137 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e90719d9-2bf9-4651-a5b8-e332bf6846fe-calico-apiserver-certs podName:e90719d9-2bf9-4651-a5b8-e332bf6846fe nodeName:}" failed. No retries permitted until 2024-12-13 13:32:54.942128489 +0000 UTC m=+49.313265567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e90719d9-2bf9-4651-a5b8-e332bf6846fe-calico-apiserver-certs") pod "calico-apiserver-c5f78578d-dhdkn" (UID: "e90719d9-2bf9-4651-a5b8-e332bf6846fe") : failed to sync secret cache: timed out waiting for the condition
Dec 13 13:32:57.354520 kubelet[3413]: E1213 13:32:54.441972 3413 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 13:32:57.354520 kubelet[3413]: E1213 13:32:54.442193 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3098ed4c-c400-4c97-958d-d1930afff8ed-config-volume podName:3098ed4c-c400-4c97-958d-d1930afff8ed nodeName:}" failed. No retries permitted until 2024-12-13 13:32:54.942184391 +0000 UTC m=+49.313321469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3098ed4c-c400-4c97-958d-d1930afff8ed-config-volume") pod "coredns-7db6d8ff4d-88gf8" (UID: "3098ed4c-c400-4c97-958d-d1930afff8ed") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 13:32:57.354520 kubelet[3413]: E1213 13:32:57.342172 3413 kubelet.go:2511] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.616s"
Dec 13 13:32:57.437301 containerd[1709]: time="2024-12-13T13:32:57.437244176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:0,}"
Dec 13 13:32:57.449798 containerd[1709]: time="2024-12-13T13:32:57.449716805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 13:32:57.455465 containerd[1709]: time="2024-12-13T13:32:57.455412555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:0,}"
Dec 13 13:32:57.464075 containerd[1709]: time="2024-12-13T13:32:57.464016882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 13:33:02.524199 containerd[1709]: time="2024-12-13T13:33:02.524106497Z" level=info msg="shim disconnected" id=eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60 namespace=k8s.io
Dec 13 13:33:02.524199 containerd[1709]: time="2024-12-13T13:33:02.524187999Z" level=warning msg="cleaning up after shim disconnected" id=eaab328ab6c993eebbea49f41be33982490d1145feebeedde00702a603445d60 namespace=k8s.io
Dec 13 13:33:02.524199 containerd[1709]: time="2024-12-13T13:33:02.524199700Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:33:02.893713 containerd[1709]: time="2024-12-13T13:33:02.893151027Z" level=error msg="Failed to destroy network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.894133 containerd[1709]: time="2024-12-13T13:33:02.894080051Z" level=error msg="encountered an error cleaning up failed sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.894238 containerd[1709]: time="2024-12-13T13:33:02.894188854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.896257 kubelet[3413]: E1213 13:33:02.894731 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.896257 kubelet[3413]: E1213 13:33:02.894838 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h"
Dec 13 13:33:02.896257 kubelet[3413]: E1213 13:33:02.894864 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h"
Dec 13 13:33:02.897450 kubelet[3413]: E1213 13:33:02.894918 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podUID="0c845263-e633-4055-81f9-4aa28ad32b74"
Dec 13 13:33:02.922021 containerd[1709]: time="2024-12-13T13:33:02.921971787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 13:33:02.929663 containerd[1709]: time="2024-12-13T13:33:02.927167324Z" level=error msg="Failed to destroy network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.931110 containerd[1709]: time="2024-12-13T13:33:02.931046926Z" level=error msg="encountered an error cleaning up failed sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.931247 containerd[1709]: time="2024-12-13T13:33:02.931177829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.931813 kubelet[3413]: E1213 13:33:02.931426 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:02.931813 kubelet[3413]: E1213 13:33:02.931515 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4"
Dec 13 13:33:02.931813 kubelet[3413]: E1213 13:33:02.931540 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4"
Dec 13 13:33:02.932619 kubelet[3413]: E1213 13:33:02.931591 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxst4" podUID="d48882ed-a3fb-4cc6-a051-3acab30e260b"
Dec 13 13:33:03.002716 containerd[1709]: time="2024-12-13T13:33:03.002537111Z" level=error msg="Failed to destroy network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.003693 containerd[1709]: time="2024-12-13T13:33:03.003637340Z" level=error msg="encountered an error cleaning up failed sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.003970 containerd[1709]: time="2024-12-13T13:33:03.003931547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.004769 kubelet[3413]: E1213 13:33:03.004438 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.004769 kubelet[3413]: E1213 13:33:03.004536 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn"
Dec 13 13:33:03.004769 kubelet[3413]: E1213 13:33:03.004567 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn"
Dec 13 13:33:03.004980 kubelet[3413]: E1213 13:33:03.004633 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe"
Dec 13 13:33:03.019811 containerd[1709]: time="2024-12-13T13:33:03.018963444Z" level=error msg="Failed to destroy network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.019811 containerd[1709]: time="2024-12-13T13:33:03.019465857Z" level=error msg="encountered an error cleaning up failed sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.019811 containerd[1709]: time="2024-12-13T13:33:03.019567360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.020288 kubelet[3413]: E1213 13:33:03.020243 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.020433 kubelet[3413]: E1213 13:33:03.020320 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx"
Dec 13 13:33:03.020433 kubelet[3413]: E1213 13:33:03.020360 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx"
Dec 13 13:33:03.022266 kubelet[3413]: E1213 13:33:03.020434 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef"
Dec 13 13:33:03.026958 containerd[1709]: time="2024-12-13T13:33:03.026715948Z" level=error msg="Failed to destroy network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.029864 containerd[1709]: time="2024-12-13T13:33:03.029687926Z" level=error msg="encountered an error cleaning up failed sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.030241 containerd[1709]: time="2024-12-13T13:33:03.030004635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.030863 kubelet[3413]: E1213 13:33:03.030811 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.031072 kubelet[3413]: E1213 13:33:03.030895 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8"
Dec 13 13:33:03.031072 kubelet[3413]: E1213 13:33:03.030923 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8"
Dec 13 13:33:03.031764 kubelet[3413]: E1213 13:33:03.031678 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-88gf8" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed"
Dec 13 13:33:03.038551 containerd[1709]: time="2024-12-13T13:33:03.038497059Z" level=error msg="Failed to destroy network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.038934 containerd[1709]: time="2024-12-13T13:33:03.038901369Z" level=error msg="encountered an error cleaning up failed sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.039035 containerd[1709]: time="2024-12-13T13:33:03.038984472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.039304 kubelet[3413]: E1213 13:33:03.039263 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:03.039408 kubelet[3413]: E1213 13:33:03.039328 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr"
Dec 13 13:33:03.039408 kubelet[3413]: E1213 13:33:03.039357 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr"
Dec 13 13:33:03.039679 kubelet[3413]: E1213 13:33:03.039432 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938"
Dec 13 13:33:03.635532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c-shm.mount: Deactivated successfully.
Dec 13 13:33:03.636177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443-shm.mount: Deactivated successfully.
Dec 13 13:33:03.636394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa-shm.mount: Deactivated successfully.
Dec 13 13:33:03.920155 kubelet[3413]: I1213 13:33:03.919200 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2"
Dec 13 13:33:03.922411 containerd[1709]: time="2024-12-13T13:33:03.921942850Z" level=info msg="StopPodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\""
Dec 13 13:33:03.922411 containerd[1709]: time="2024-12-13T13:33:03.922217257Z" level=info msg="Ensure that sandbox 9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2 in task-service has been cleanup successfully"
Dec 13 13:33:03.922879 kubelet[3413]: I1213 13:33:03.922295 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c"
Dec 13 13:33:03.927778 containerd[1709]: time="2024-12-13T13:33:03.922989678Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\""
Dec 13 13:33:03.927778 containerd[1709]: time="2024-12-13T13:33:03.923218384Z" level=info msg="Ensure that sandbox c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c in task-service has been cleanup successfully"
Dec 13 13:33:03.927999 kubelet[3413]: I1213 13:33:03.926908 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa"
Dec 13 13:33:03.928302 containerd[1709]: time="2024-12-13T13:33:03.928126913Z" level=info msg="TearDown network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" successfully"
Dec 13 13:33:03.928302 containerd[1709]: time="2024-12-13T13:33:03.928153914Z" level=info msg="StopPodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" returns successfully"
Dec 13 13:33:03.928840 containerd[1709]: time="2024-12-13T13:33:03.928810331Z" level=info msg="TearDown network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" successfully"
Dec 13 13:33:03.928840 containerd[1709]: time="2024-12-13T13:33:03.928837632Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" returns successfully"
Dec 13 13:33:03.930526 systemd[1]: run-netns-cni\x2db77e8c39\x2d98da\x2d0dc8\x2d9a1d\x2d7c08f2b33ed3.mount: Deactivated successfully.
Dec 13 13:33:03.930860 systemd[1]: run-netns-cni\x2d91e49095\x2db2fe\x2db37a\x2d2d95\x2d0f2b2f456231.mount: Deactivated successfully.
Dec 13 13:33:03.932937 containerd[1709]: time="2024-12-13T13:33:03.931310297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:1,}"
Dec 13 13:33:03.934947 containerd[1709]: time="2024-12-13T13:33:03.934917092Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\""
Dec 13 13:33:03.935205 containerd[1709]: time="2024-12-13T13:33:03.935178999Z" level=info msg="Ensure that sandbox 0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa in task-service has been cleanup successfully"
Dec 13 13:33:03.936663 containerd[1709]: time="2024-12-13T13:33:03.936628037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 13:33:03.938495 kubelet[3413]: I1213 13:33:03.938467 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a"
Dec 13 13:33:03.939502 containerd[1709]: time="2024-12-13T13:33:03.939149804Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\""
Dec 13 13:33:03.940159 containerd[1709]: time="2024-12-13T13:33:03.940136730Z" level=info msg="Ensure that sandbox 512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a in task-service has been cleanup successfully"
Dec 13 13:33:03.940186 systemd[1]: run-netns-cni\x2d2dada305\x2dc2dc\x2dfd3a\x2d364d\x2d461f29a3ec89.mount: Deactivated successfully.
Dec 13 13:33:03.943560 containerd[1709]: time="2024-12-13T13:33:03.943399216Z" level=info msg="TearDown network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" successfully"
Dec 13 13:33:03.943560 containerd[1709]: time="2024-12-13T13:33:03.943425116Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" returns successfully"
Dec 13 13:33:03.946044 containerd[1709]: time="2024-12-13T13:33:03.945699176Z" level=info msg="TearDown network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" successfully"
Dec 13 13:33:03.946044 containerd[1709]: time="2024-12-13T13:33:03.945725077Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" returns successfully"
Dec 13 13:33:03.947031 containerd[1709]: time="2024-12-13T13:33:03.947001311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:1,}"
Dec 13 13:33:03.947535 kubelet[3413]: I1213 13:33:03.947497 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc"
Dec 13 13:33:03.948306 systemd[1]: run-netns-cni\x2de1253f63\x2d1c04\x2dea16\x2d4d7e\x2d0bed07427fdf.mount: Deactivated successfully.
Dec 13 13:33:03.950928 containerd[1709]: time="2024-12-13T13:33:03.949455575Z" level=info msg="StopPodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\"" Dec 13 13:33:03.950928 containerd[1709]: time="2024-12-13T13:33:03.949714182Z" level=info msg="Ensure that sandbox f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc in task-service has been cleanup successfully" Dec 13 13:33:03.950928 containerd[1709]: time="2024-12-13T13:33:03.948594153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:1,}" Dec 13 13:33:03.951670 kubelet[3413]: I1213 13:33:03.951522 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443" Dec 13 13:33:03.952940 containerd[1709]: time="2024-12-13T13:33:03.952874165Z" level=info msg="StopPodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\"" Dec 13 13:33:03.953125 containerd[1709]: time="2024-12-13T13:33:03.953023369Z" level=info msg="TearDown network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" successfully" Dec 13 13:33:03.953125 containerd[1709]: time="2024-12-13T13:33:03.953045070Z" level=info msg="StopPodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" returns successfully" Dec 13 13:33:03.953955 containerd[1709]: time="2024-12-13T13:33:03.953828591Z" level=info msg="Ensure that sandbox 23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443 in task-service has been cleanup successfully" Dec 13 13:33:03.955612 containerd[1709]: time="2024-12-13T13:33:03.954843717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:33:03.956309 containerd[1709]: 
time="2024-12-13T13:33:03.955958147Z" level=info msg="TearDown network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" successfully" Dec 13 13:33:03.956309 containerd[1709]: time="2024-12-13T13:33:03.956239454Z" level=info msg="StopPodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" returns successfully" Dec 13 13:33:03.957293 containerd[1709]: time="2024-12-13T13:33:03.957122478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:1,}" Dec 13 13:33:03.956948 systemd[1]: run-netns-cni\x2d183066bf\x2da31f\x2d6278\x2d1d19\x2ddac7f714cfb2.mount: Deactivated successfully. Dec 13 13:33:04.466521 containerd[1709]: time="2024-12-13T13:33:04.466230500Z" level=error msg="Failed to destroy network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.468703 containerd[1709]: time="2024-12-13T13:33:04.468660964Z" level=error msg="encountered an error cleaning up failed sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.469003 containerd[1709]: time="2024-12-13T13:33:04.468970772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.470079 kubelet[3413]: E1213 13:33:04.470038 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.470614 kubelet[3413]: E1213 13:33:04.470230 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:04.470614 kubelet[3413]: E1213 13:33:04.470293 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:04.470614 kubelet[3413]: E1213 13:33:04.470354 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-88gf8" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed" Dec 13 13:33:04.472453 containerd[1709]: time="2024-12-13T13:33:04.472423063Z" level=error msg="Failed to destroy network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.473451 containerd[1709]: time="2024-12-13T13:33:04.473418389Z" level=error msg="encountered an error cleaning up failed sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.473700 containerd[1709]: time="2024-12-13T13:33:04.473584194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.473926 kubelet[3413]: E1213 13:33:04.473895 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.474519 kubelet[3413]: E1213 13:33:04.474246 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:04.474519 kubelet[3413]: E1213 13:33:04.474376 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:04.474734 kubelet[3413]: E1213 13:33:04.474436 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef" Dec 13 13:33:04.516656 containerd[1709]: time="2024-12-13T13:33:04.516601528Z" level=error msg="Failed to destroy network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.517245 containerd[1709]: time="2024-12-13T13:33:04.517210344Z" level=error msg="encountered an error cleaning up failed sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.517794 containerd[1709]: time="2024-12-13T13:33:04.517521352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.517794 containerd[1709]: time="2024-12-13T13:33:04.517455650Z" level=error msg="Failed to destroy network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.518160 containerd[1709]: time="2024-12-13T13:33:04.518131568Z" level=error msg="encountered an error cleaning up 
failed sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.518292 containerd[1709]: time="2024-12-13T13:33:04.518268972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.519345 kubelet[3413]: E1213 13:33:04.518710 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.519345 kubelet[3413]: E1213 13:33:04.518804 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.519345 kubelet[3413]: E1213 13:33:04.518856 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:04.519345 kubelet[3413]: E1213 13:33:04.518884 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:04.519560 kubelet[3413]: E1213 13:33:04.518935 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:33:04.519560 kubelet[3413]: E1213 13:33:04.519213 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4" Dec 13 13:33:04.519560 kubelet[3413]: E1213 13:33:04.519241 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4" Dec 13 13:33:04.519719 kubelet[3413]: E1213 13:33:04.519303 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxst4" podUID="d48882ed-a3fb-4cc6-a051-3acab30e260b" Dec 13 13:33:04.548568 containerd[1709]: time="2024-12-13T13:33:04.548503869Z" level=error msg="Failed to destroy network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.549013 containerd[1709]: time="2024-12-13T13:33:04.548935680Z" level=error msg="encountered an error cleaning up failed sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.549144 containerd[1709]: time="2024-12-13T13:33:04.549062584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.550122 kubelet[3413]: E1213 13:33:04.549945 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.550122 kubelet[3413]: E1213 13:33:04.550035 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:04.550122 kubelet[3413]: E1213 13:33:04.550073 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:04.551162 kubelet[3413]: E1213 13:33:04.550145 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe" Dec 13 13:33:04.560383 containerd[1709]: time="2024-12-13T13:33:04.560326880Z" level=error msg="Failed to destroy network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.560968 containerd[1709]: time="2024-12-13T13:33:04.560698790Z" level=error msg="encountered an error cleaning up failed sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.560968 containerd[1709]: time="2024-12-13T13:33:04.560797793Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.561636 kubelet[3413]: E1213 13:33:04.561033 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:04.561636 kubelet[3413]: E1213 13:33:04.561145 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:04.561636 kubelet[3413]: E1213 13:33:04.561180 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:04.562126 kubelet[3413]: E1213 
13:33:04.561250 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podUID="0c845263-e633-4055-81f9-4aa28ad32b74" Dec 13 13:33:04.636421 systemd[1]: run-netns-cni\x2d7f9ddb78\x2d0a87\x2dd023\x2d70ee\x2dd116ebcda5bb.mount: Deactivated successfully. Dec 13 13:33:04.961380 kubelet[3413]: I1213 13:33:04.961323 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d" Dec 13 13:33:04.970352 containerd[1709]: time="2024-12-13T13:33:04.969164859Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:33:04.972835 containerd[1709]: time="2024-12-13T13:33:04.971591823Z" level=info msg="Ensure that sandbox a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d in task-service has been cleanup successfully" Dec 13 13:33:04.973097 containerd[1709]: time="2024-12-13T13:33:04.973055562Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully" Dec 13 13:33:04.973422 containerd[1709]: time="2024-12-13T13:33:04.973180465Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully" Dec 13 
13:33:04.976183 containerd[1709]: time="2024-12-13T13:33:04.976154143Z" level=info msg="StopPodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\""
Dec 13 13:33:04.976769 containerd[1709]: time="2024-12-13T13:33:04.976365749Z" level=info msg="TearDown network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" successfully"
Dec 13 13:33:04.976769 containerd[1709]: time="2024-12-13T13:33:04.976388050Z" level=info msg="StopPodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" returns successfully"
Dec 13 13:33:04.980499 systemd[1]: run-netns-cni\x2d73c262e6\x2d14ee\x2daeb6\x2d08e1\x2dd29c395d3e48.mount: Deactivated successfully.
Dec 13 13:33:04.981937 containerd[1709]: time="2024-12-13T13:33:04.981243778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:2,}"
Dec 13 13:33:05.014952 kubelet[3413]: I1213 13:33:05.014834 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30"
Dec 13 13:33:05.018417 containerd[1709]: time="2024-12-13T13:33:05.018378357Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\""
Dec 13 13:33:05.022090 containerd[1709]: time="2024-12-13T13:33:05.020819421Z" level=info msg="Ensure that sandbox 54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30 in task-service has been cleanup successfully"
Dec 13 13:33:05.026066 containerd[1709]: time="2024-12-13T13:33:05.025837353Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully"
Dec 13 13:33:05.026066 containerd[1709]: time="2024-12-13T13:33:05.025866154Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully"
Dec 13 13:33:05.028396 kubelet[3413]: I1213 13:33:05.028346 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f"
Dec 13 13:33:05.029520 systemd[1]: run-netns-cni\x2d0939c49e\x2d7b7b\x2deef6\x2df086\x2d3cbd76919545.mount: Deactivated successfully.
Dec 13 13:33:05.032100 containerd[1709]: time="2024-12-13T13:33:05.030832285Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\""
Dec 13 13:33:05.034301 containerd[1709]: time="2024-12-13T13:33:05.034152973Z" level=info msg="TearDown network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" successfully"
Dec 13 13:33:05.034301 containerd[1709]: time="2024-12-13T13:33:05.034177473Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" returns successfully"
Dec 13 13:33:05.037812 containerd[1709]: time="2024-12-13T13:33:05.037420259Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\""
Dec 13 13:33:05.037812 containerd[1709]: time="2024-12-13T13:33:05.037693166Z" level=info msg="Ensure that sandbox 704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f in task-service has been cleanup successfully"
Dec 13 13:33:05.040861 containerd[1709]: time="2024-12-13T13:33:05.040824148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:2,}"
Dec 13 13:33:05.042305 kubelet[3413]: I1213 13:33:05.042280 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb"
Dec 13 13:33:05.046075 containerd[1709]: time="2024-12-13T13:33:05.040980353Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully"
Dec 13 13:33:05.045817 systemd[1]: run-netns-cni\x2d97771ec1\x2d9d0e\x2de8e2\x2dd221\x2dd6198342e154.mount: Deactivated successfully.
Dec 13 13:33:05.047338 containerd[1709]: time="2024-12-13T13:33:05.045455771Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully"
Dec 13 13:33:05.051764 containerd[1709]: time="2024-12-13T13:33:05.049141768Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\""
Dec 13 13:33:05.051764 containerd[1709]: time="2024-12-13T13:33:05.049382774Z" level=info msg="Ensure that sandbox 07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb in task-service has been cleanup successfully"
Dec 13 13:33:05.053146 containerd[1709]: time="2024-12-13T13:33:05.053095872Z" level=info msg="TearDown network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" successfully"
Dec 13 13:33:05.053809 containerd[1709]: time="2024-12-13T13:33:05.053784790Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" returns successfully"
Dec 13 13:33:05.055130 systemd[1]: run-netns-cni\x2d9992a7f5\x2d6991\x2db4bd\x2dd6cc\x2d88e136e94a5e.mount: Deactivated successfully.
Dec 13 13:33:05.056710 containerd[1709]: time="2024-12-13T13:33:05.056632465Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\""
Dec 13 13:33:05.057486 containerd[1709]: time="2024-12-13T13:33:05.057463687Z" level=info msg="TearDown network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" successfully"
Dec 13 13:33:05.057579 containerd[1709]: time="2024-12-13T13:33:05.057562290Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" returns successfully"
Dec 13 13:33:05.058050 containerd[1709]: time="2024-12-13T13:33:05.058030902Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\""
Dec 13 13:33:05.058230 containerd[1709]: time="2024-12-13T13:33:05.058214607Z" level=info msg="TearDown network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" successfully"
Dec 13 13:33:05.058334 containerd[1709]: time="2024-12-13T13:33:05.058319910Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" returns successfully"
Dec 13 13:33:05.059496 containerd[1709]: time="2024-12-13T13:33:05.059467740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:2,}"
Dec 13 13:33:05.060371 containerd[1709]: time="2024-12-13T13:33:05.060345063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:2,}"
Dec 13 13:33:05.062318 kubelet[3413]: I1213 13:33:05.062292 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204"
Dec 13 13:33:05.065290 containerd[1709]: time="2024-12-13T13:33:05.065242092Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\""
Dec 13 13:33:05.067159 containerd[1709]: time="2024-12-13T13:33:05.067110441Z" level=info msg="Ensure that sandbox 93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204 in task-service has been cleanup successfully"
Dec 13 13:33:05.067504 containerd[1709]: time="2024-12-13T13:33:05.067462651Z" level=info msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully"
Dec 13 13:33:05.067573 containerd[1709]: time="2024-12-13T13:33:05.067505452Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully"
Dec 13 13:33:05.072715 containerd[1709]: time="2024-12-13T13:33:05.071942969Z" level=info msg="StopPodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\""
Dec 13 13:33:05.072715 containerd[1709]: time="2024-12-13T13:33:05.072055472Z" level=info msg="TearDown network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" successfully"
Dec 13 13:33:05.072715 containerd[1709]: time="2024-12-13T13:33:05.072069772Z" level=info msg="StopPodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" returns successfully"
Dec 13 13:33:05.074322 containerd[1709]: time="2024-12-13T13:33:05.074296331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:2,}"
Dec 13 13:33:05.076512 kubelet[3413]: I1213 13:33:05.076491 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874"
Dec 13 13:33:05.078702 containerd[1709]: time="2024-12-13T13:33:05.078583844Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\""
Dec 13 13:33:05.079288 containerd[1709]: time="2024-12-13T13:33:05.079243861Z" level=info msg="Ensure that sandbox 97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874 in task-service has been cleanup successfully"
Dec 13 13:33:05.079656 containerd[1709]: time="2024-12-13T13:33:05.079634972Z" level=info msg="TearDown network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" successfully"
Dec 13 13:33:05.079844 containerd[1709]: time="2024-12-13T13:33:05.079779775Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" returns successfully"
Dec 13 13:33:05.083319 containerd[1709]: time="2024-12-13T13:33:05.083089663Z" level=info msg="StopPodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\""
Dec 13 13:33:05.083319 containerd[1709]: time="2024-12-13T13:33:05.083207166Z" level=info msg="TearDown network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" successfully"
Dec 13 13:33:05.083319 containerd[1709]: time="2024-12-13T13:33:05.083221266Z" level=info msg="StopPodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" returns successfully"
Dec 13 13:33:05.084794 containerd[1709]: time="2024-12-13T13:33:05.083849483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:2,}"
Dec 13 13:33:05.636296 systemd[1]: run-netns-cni\x2d6171ec84\x2d1c85\x2de13c\x2db6d8\x2d53e365d0e865.mount: Deactivated successfully.
Dec 13 13:33:05.636410 systemd[1]: run-netns-cni\x2d84e3e443\x2d1999\x2d6b69\x2d076f\x2ddffa46875a1e.mount: Deactivated successfully.
Dec 13 13:33:05.704573 containerd[1709]: time="2024-12-13T13:33:05.704523346Z" level=info msg="StopPodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\""
Dec 13 13:33:05.705006 containerd[1709]: time="2024-12-13T13:33:05.704648350Z" level=info msg="TearDown network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" successfully"
Dec 13 13:33:05.705006 containerd[1709]: time="2024-12-13T13:33:05.704663950Z" level=info msg="StopPodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" returns successfully"
Dec 13 13:33:05.705153 containerd[1709]: time="2024-12-13T13:33:05.705127862Z" level=info msg="RemovePodSandbox for \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\""
Dec 13 13:33:05.705201 containerd[1709]: time="2024-12-13T13:33:05.705165363Z" level=info msg="Forcibly stopping sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\""
Dec 13 13:33:05.705337 containerd[1709]: time="2024-12-13T13:33:05.705245465Z" level=info msg="TearDown network for sandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" successfully"
Dec 13 13:33:13.442402 containerd[1709]: time="2024-12-13T13:33:13.442327674Z" level=error msg="Failed to destroy network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:13.443481 containerd[1709]: time="2024-12-13T13:33:13.443220097Z" level=error msg="encountered an error cleaning up failed sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:13.443481 containerd[1709]: time="2024-12-13T13:33:13.443324700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:13.444572 kubelet[3413]: E1213 13:33:13.444508 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:13.446014 kubelet[3413]: E1213 13:33:13.444600 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx"
Dec 13 13:33:13.446014 kubelet[3413]: E1213 13:33:13.444634 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx"
Dec 13 13:33:13.446014 kubelet[3413]: E1213 13:33:13.444693 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef"
Dec 13 13:33:13.447404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26-shm.mount: Deactivated successfully.
Dec 13 13:33:13.494223 containerd[1709]: time="2024-12-13T13:33:13.494098415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:33:13.494223 containerd[1709]: time="2024-12-13T13:33:13.494183917Z" level=info msg="RemovePodSandbox \"9c9b8726975afc5c77096e93439aa815b4e726aa1d5d7677f13b81a0a9aac4a2\" returns successfully"
Dec 13 13:33:13.494963 containerd[1709]: time="2024-12-13T13:33:13.494884835Z" level=info msg="StopPodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\""
Dec 13 13:33:13.495096 containerd[1709]: time="2024-12-13T13:33:13.495019439Z" level=info msg="TearDown network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" successfully"
Dec 13 13:33:13.495096 containerd[1709]: time="2024-12-13T13:33:13.495040539Z" level=info msg="StopPodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" returns successfully"
Dec 13 13:33:13.495465 containerd[1709]: time="2024-12-13T13:33:13.495427449Z" level=info msg="RemovePodSandbox for \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\""
Dec 13 13:33:13.495688 containerd[1709]: time="2024-12-13T13:33:13.495465350Z" level=info msg="Forcibly stopping sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\""
Dec 13 13:33:13.495688 containerd[1709]: time="2024-12-13T13:33:13.495561453Z" level=info msg="TearDown network for sandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" successfully"
Dec 13 13:33:14.099117 kubelet[3413]: I1213 13:33:14.099080 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26"
Dec 13 13:33:14.100611 containerd[1709]: time="2024-12-13T13:33:14.100058610Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\""
Dec 13 13:33:14.100611 containerd[1709]: time="2024-12-13T13:33:14.100301417Z" level=info msg="Ensure that sandbox ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26 in task-service has been cleanup successfully"
Dec 13 13:33:14.103236 containerd[1709]: time="2024-12-13T13:33:14.101755054Z" level=info msg="TearDown network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" successfully"
Dec 13 13:33:14.104114 containerd[1709]: time="2024-12-13T13:33:14.104087815Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" returns successfully"
Dec 13 13:33:14.105774 containerd[1709]: time="2024-12-13T13:33:14.105646655Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\""
Dec 13 13:33:14.105898 containerd[1709]: time="2024-12-13T13:33:14.105862061Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully"
Dec 13 13:33:14.106027 containerd[1709]: time="2024-12-13T13:33:14.105960263Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully"
Dec 13 13:33:14.106472 systemd[1]: run-netns-cni\x2d411d4991\x2d104e\x2d7c3d\x2d5af0\x2d16305599df4f.mount: Deactivated successfully.
Dec 13 13:33:14.107079 containerd[1709]: time="2024-12-13T13:33:14.107050891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:3,}"
Dec 13 13:33:14.724315 containerd[1709]: time="2024-12-13T13:33:14.724271178Z" level=error msg="Failed to destroy network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:14.726459 containerd[1709]: time="2024-12-13T13:33:14.725057199Z" level=error msg="encountered an error cleaning up failed sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:14.726459 containerd[1709]: time="2024-12-13T13:33:14.725155501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:14.726606 kubelet[3413]: E1213 13:33:14.725401 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:14.726606 kubelet[3413]: E1213 13:33:14.725466 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8"
Dec 13 13:33:14.726606 kubelet[3413]: E1213 13:33:14.725495 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8"
Dec 13 13:33:14.727047 kubelet[3413]: E1213 13:33:14.725549 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-88gf8" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed"
Dec 13 13:33:14.852772 containerd[1709]: time="2024-12-13T13:33:14.851934504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:33:14.852772 containerd[1709]: time="2024-12-13T13:33:14.852008806Z" level=info msg="RemovePodSandbox \"23e471ac3a3f93ff45fac7d008bd5a3552121031429c7a37d8f5380dc12dd443\" returns successfully"
Dec 13 13:33:14.853880 containerd[1709]: time="2024-12-13T13:33:14.853335441Z" level=info msg="StopPodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\""
Dec 13 13:33:14.853880 containerd[1709]: time="2024-12-13T13:33:14.853451644Z" level=info msg="TearDown network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" successfully"
Dec 13 13:33:14.853880 containerd[1709]: time="2024-12-13T13:33:14.853464844Z" level=info msg="StopPodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" returns successfully"
Dec 13 13:33:14.856091 containerd[1709]: time="2024-12-13T13:33:14.856062612Z" level=info msg="RemovePodSandbox for \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\""
Dec 13 13:33:14.856188 containerd[1709]: time="2024-12-13T13:33:14.856096613Z" level=info msg="Forcibly stopping sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\""
Dec 13 13:33:14.856233 containerd[1709]: time="2024-12-13T13:33:14.856182016Z" level=info msg="TearDown network for sandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" successfully"
Dec 13 13:33:15.066578 containerd[1709]: time="2024-12-13T13:33:15.065717330Z" level=error msg="Failed to destroy network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.068538 containerd[1709]: time="2024-12-13T13:33:15.068489403Z" level=error msg="encountered an error cleaning up failed sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.068652 containerd[1709]: time="2024-12-13T13:33:15.068581006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.069371 kubelet[3413]: E1213 13:33:15.068873 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.069371 kubelet[3413]: E1213 13:33:15.068951 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h"
Dec 13 13:33:15.069371 kubelet[3413]: E1213 13:33:15.068981 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h"
Dec 13 13:33:15.070468 kubelet[3413]: E1213 13:33:15.069034 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podUID="0c845263-e633-4055-81f9-4aa28ad32b74"
Dec 13 13:33:15.114912 kubelet[3413]: I1213 13:33:15.114880 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7"
Dec 13 13:33:15.116828 containerd[1709]: time="2024-12-13T13:33:15.116218959Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\""
Dec 13 13:33:15.117712 containerd[1709]: time="2024-12-13T13:33:15.117538594Z" level=info msg="Ensure that sandbox 9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7 in task-service has been cleanup successfully"
Dec 13 13:33:15.118769 containerd[1709]: time="2024-12-13T13:33:15.118548221Z" level=info msg="TearDown network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" successfully"
Dec 13 13:33:15.118769 containerd[1709]: time="2024-12-13T13:33:15.118631823Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" returns successfully"
Dec 13 13:33:15.121606 containerd[1709]: time="2024-12-13T13:33:15.121577000Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\""
Dec 13 13:33:15.122122 containerd[1709]: time="2024-12-13T13:33:15.122041812Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully"
Dec 13 13:33:15.122122 containerd[1709]: time="2024-12-13T13:33:15.122063913Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully"
Dec 13 13:33:15.123680 kubelet[3413]: I1213 13:33:15.123329 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560"
Dec 13 13:33:15.124854 containerd[1709]: time="2024-12-13T13:33:15.124700282Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\""
Dec 13 13:33:15.125197 containerd[1709]: time="2024-12-13T13:33:15.124804485Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\""
Dec 13 13:33:15.125389 containerd[1709]: time="2024-12-13T13:33:15.125264897Z" level=info msg="TearDown network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" successfully"
Dec 13 13:33:15.125389 containerd[1709]: time="2024-12-13T13:33:15.125280998Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" returns successfully"
Dec 13 13:33:15.128291 containerd[1709]: time="2024-12-13T13:33:15.128108772Z" level=info msg="Ensure that sandbox 3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560 in task-service has been cleanup successfully"
Dec 13 13:33:15.128460 containerd[1709]: time="2024-12-13T13:33:15.128440681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:3,}"
Dec 13 13:33:15.129623 containerd[1709]: time="2024-12-13T13:33:15.129016896Z" level=info msg="TearDown network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" successfully"
Dec 13 13:33:15.129623 containerd[1709]: time="2024-12-13T13:33:15.129040797Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" returns successfully"
Dec 13 13:33:15.129925 containerd[1709]: time="2024-12-13T13:33:15.129895119Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\""
Dec 13 13:33:15.130037 containerd[1709]: time="2024-12-13T13:33:15.130017822Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully"
Dec 13 13:33:15.130088 containerd[1709]: time="2024-12-13T13:33:15.130038623Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully"
Dec 13 13:33:15.131453 containerd[1709]: time="2024-12-13T13:33:15.131053650Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\""
Dec 13 13:33:15.131453 containerd[1709]: time="2024-12-13T13:33:15.131146852Z" level=info msg="TearDown network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" successfully"
Dec 13 13:33:15.131453 containerd[1709]: time="2024-12-13T13:33:15.131161652Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" returns successfully"
Dec 13 13:33:15.133426 containerd[1709]: time="2024-12-13T13:33:15.132935199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:3,}"
Dec 13 13:33:15.150299 containerd[1709]: time="2024-12-13T13:33:15.150124152Z" level=error msg="Failed to destroy network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.154310 containerd[1709]: time="2024-12-13T13:33:15.154272061Z" level=error msg="encountered an error cleaning up failed sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.154385 containerd[1709]: time="2024-12-13T13:33:15.154349363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.155437 kubelet[3413]: E1213 13:33:15.154547 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.155437 kubelet[3413]: E1213 13:33:15.154606 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn"
Dec 13 13:33:15.155437 kubelet[3413]: E1213 13:33:15.154639 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn"
Dec 13 13:33:15.155632 kubelet[3413]: E1213 13:33:15.154695 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe"
Dec 13 13:33:15.202185 containerd[1709]: time="2024-12-13T13:33:15.202133020Z" level=error msg="Failed to destroy network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.202663 containerd[1709]: time="2024-12-13T13:33:15.202627233Z" level=error msg="encountered an error cleaning up failed sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.202739 containerd[1709]: time="2024-12-13T13:33:15.202703135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.205793 kubelet[3413]: E1213 13:33:15.204716 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:33:15.205793 kubelet[3413]: E1213 13:33:15.204783 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4"
Dec 13 13:33:15.205793 kubelet[3413]: E1213 13:33:15.204810 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4"
Dec 13 13:33:15.206079 kubelet[3413]: E1213 13:33:15.204853 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxst4" podUID="d48882ed-a3fb-4cc6-a051-3acab30e260b"
Dec 13 13:33:15.256478 containerd[1709]: time="2024-12-13T13:33:15.256437049Z" level=error msg="Failed to destroy network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:15.257124 containerd[1709]: time="2024-12-13T13:33:15.257061266Z" level=error msg="encountered an error cleaning up failed sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:15.257306 containerd[1709]: time="2024-12-13T13:33:15.257278072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:15.257819 kubelet[3413]: E1213 13:33:15.257778 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:15.258001 kubelet[3413]: E1213 13:33:15.257980 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:15.258191 kubelet[3413]: E1213 13:33:15.258080 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:15.258633 kubelet[3413]: E1213 13:33:15.258572 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:33:15.445181 containerd[1709]: time="2024-12-13T13:33:15.444423797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:33:15.445181 containerd[1709]: time="2024-12-13T13:33:15.444501499Z" level=info msg="RemovePodSandbox \"f9184b532475922b9100a5c938652161842880f337124073da61e386b2f345dc\" returns successfully" Dec 13 13:33:15.445994 containerd[1709]: time="2024-12-13T13:33:15.445963737Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\"" Dec 13 13:33:15.446214 containerd[1709]: time="2024-12-13T13:33:15.446182343Z" level=info msg="TearDown network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" successfully" Dec 13 13:33:15.446292 containerd[1709]: time="2024-12-13T13:33:15.446278046Z" level=info msg="StopPodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" returns successfully" Dec 13 13:33:15.447230 containerd[1709]: time="2024-12-13T13:33:15.447205270Z" level=info msg="RemovePodSandbox for \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\"" Dec 13 13:33:15.447391 containerd[1709]: time="2024-12-13T13:33:15.447371274Z" level=info msg="Forcibly stopping sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\"" Dec 13 13:33:15.447594 containerd[1709]: time="2024-12-13T13:33:15.447546479Z" level=info msg="TearDown network for sandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" successfully" Dec 13 13:33:15.607937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9-shm.mount: Deactivated successfully. Dec 13 13:33:15.608053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880-shm.mount: Deactivated successfully. Dec 13 13:33:15.608140 systemd[1]: run-netns-cni\x2d8f75273d\x2dffed\x2d256d\x2dda16\x2d74e1dc45743d.mount: Deactivated successfully. 
Dec 13 13:33:15.608213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560-shm.mount: Deactivated successfully. Dec 13 13:33:15.608289 systemd[1]: run-netns-cni\x2d8cf15b30\x2df390\x2d3b9f\x2da5d4\x2d3540b7c3fadc.mount: Deactivated successfully. Dec 13 13:33:15.608360 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7-shm.mount: Deactivated successfully. Dec 13 13:33:15.999939 containerd[1709]: time="2024-12-13T13:33:15.999874115Z" level=error msg="Failed to destroy network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.002503 containerd[1709]: time="2024-12-13T13:33:16.001992571Z" level=error msg="encountered an error cleaning up failed sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.002503 containerd[1709]: time="2024-12-13T13:33:16.002083773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.003704 kubelet[3413]: E1213 13:33:16.002729 3413 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.003704 kubelet[3413]: E1213 13:33:16.002826 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:16.003704 kubelet[3413]: E1213 13:33:16.002864 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:16.004262 kubelet[3413]: E1213 13:33:16.003793 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef" Dec 13 13:33:16.004331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707-shm.mount: Deactivated successfully. Dec 13 13:33:16.051770 containerd[1709]: time="2024-12-13T13:33:16.051198366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:33:16.051770 containerd[1709]: time="2024-12-13T13:33:16.051263668Z" level=info msg="RemovePodSandbox \"0faa43f2ad8e70510b7c5990679c2810ce6aa6b8b9fc1bbc25e194e37c5b0aaa\" returns successfully" Dec 13 13:33:16.053254 containerd[1709]: time="2024-12-13T13:33:16.053155317Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\"" Dec 13 13:33:16.053346 containerd[1709]: time="2024-12-13T13:33:16.053277321Z" level=info msg="TearDown network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" successfully" Dec 13 13:33:16.053346 containerd[1709]: time="2024-12-13T13:33:16.053292321Z" level=info msg="StopPodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" returns successfully" Dec 13 13:33:16.053698 containerd[1709]: time="2024-12-13T13:33:16.053675431Z" level=info msg="RemovePodSandbox for \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\"" Dec 13 13:33:16.053772 containerd[1709]: time="2024-12-13T13:33:16.053709832Z" level=info msg="Forcibly stopping sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\"" Dec 13 13:33:16.053922 containerd[1709]: time="2024-12-13T13:33:16.053800334Z" level=info 
msg="TearDown network for sandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" successfully" Dec 13 13:33:16.128907 kubelet[3413]: I1213 13:33:16.128874 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9" Dec 13 13:33:16.130162 containerd[1709]: time="2024-12-13T13:33:16.130016240Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" Dec 13 13:33:16.132766 containerd[1709]: time="2024-12-13T13:33:16.130248546Z" level=info msg="Ensure that sandbox 2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9 in task-service has been cleanup successfully" Dec 13 13:33:16.135298 containerd[1709]: time="2024-12-13T13:33:16.135270779Z" level=info msg="TearDown network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" successfully" Dec 13 13:33:16.135385 containerd[1709]: time="2024-12-13T13:33:16.135299379Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" returns successfully" Dec 13 13:33:16.136320 containerd[1709]: time="2024-12-13T13:33:16.135622288Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" Dec 13 13:33:16.136199 systemd[1]: run-netns-cni\x2de2f5cd86\x2ddc6e\x2d42e6\x2df323\x2d02d072c16fa1.mount: Deactivated successfully. 
Dec 13 13:33:16.136616 containerd[1709]: time="2024-12-13T13:33:16.136595813Z" level=info msg="TearDown network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" successfully" Dec 13 13:33:16.137049 containerd[1709]: time="2024-12-13T13:33:16.137027525Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" returns successfully" Dec 13 13:33:16.138220 containerd[1709]: time="2024-12-13T13:33:16.138032451Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\"" Dec 13 13:33:16.138220 containerd[1709]: time="2024-12-13T13:33:16.138139254Z" level=info msg="TearDown network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" successfully" Dec 13 13:33:16.138220 containerd[1709]: time="2024-12-13T13:33:16.138155154Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" returns successfully" Dec 13 13:33:16.139381 containerd[1709]: time="2024-12-13T13:33:16.139176581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:3,}" Dec 13 13:33:16.139884 kubelet[3413]: I1213 13:33:16.139841 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880" Dec 13 13:33:16.141502 containerd[1709]: time="2024-12-13T13:33:16.141289137Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\"" Dec 13 13:33:16.141809 containerd[1709]: time="2024-12-13T13:33:16.141780950Z" level=info msg="Ensure that sandbox 0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880 in task-service has been cleanup successfully" Dec 13 13:33:16.143520 containerd[1709]: time="2024-12-13T13:33:16.143472494Z" level=info msg="TearDown network for sandbox 
\"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" successfully" Dec 13 13:33:16.143520 containerd[1709]: time="2024-12-13T13:33:16.143490195Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" returns successfully" Dec 13 13:33:16.144066 containerd[1709]: time="2024-12-13T13:33:16.143947107Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\"" Dec 13 13:33:16.144066 containerd[1709]: time="2024-12-13T13:33:16.144036309Z" level=info msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully" Dec 13 13:33:16.144066 containerd[1709]: time="2024-12-13T13:33:16.144051710Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully" Dec 13 13:33:16.145264 containerd[1709]: time="2024-12-13T13:33:16.144633425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:33:16.145348 kubelet[3413]: I1213 13:33:16.145188 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0" Dec 13 13:33:16.145876 containerd[1709]: time="2024-12-13T13:33:16.145855257Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" Dec 13 13:33:16.146215 containerd[1709]: time="2024-12-13T13:33:16.146193266Z" level=info msg="Ensure that sandbox a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0 in task-service has been cleanup successfully" Dec 13 13:33:16.146445 containerd[1709]: time="2024-12-13T13:33:16.146426272Z" level=info msg="TearDown network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" successfully" Dec 13 13:33:16.146523 
containerd[1709]: time="2024-12-13T13:33:16.146509574Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" returns successfully" Dec 13 13:33:16.148004 containerd[1709]: time="2024-12-13T13:33:16.147978313Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\"" Dec 13 13:33:16.148178 containerd[1709]: time="2024-12-13T13:33:16.148159318Z" level=info msg="TearDown network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" successfully" Dec 13 13:33:16.148268 containerd[1709]: time="2024-12-13T13:33:16.148250920Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" returns successfully" Dec 13 13:33:16.149527 containerd[1709]: time="2024-12-13T13:33:16.149501353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:3,}" Dec 13 13:33:16.162132 kubelet[3413]: I1213 13:33:16.162113 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707" Dec 13 13:33:16.163735 containerd[1709]: time="2024-12-13T13:33:16.163698027Z" level=info msg="StopPodSandbox for \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" Dec 13 13:33:16.165888 containerd[1709]: time="2024-12-13T13:33:16.165758081Z" level=info msg="Ensure that sandbox 1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707 in task-service has been cleanup successfully" Dec 13 13:33:16.166095 containerd[1709]: time="2024-12-13T13:33:16.166076089Z" level=info msg="TearDown network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" successfully" Dec 13 13:33:16.166283 containerd[1709]: time="2024-12-13T13:33:16.166265594Z" level=info msg="StopPodSandbox for 
\"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" returns successfully" Dec 13 13:33:16.166923 containerd[1709]: time="2024-12-13T13:33:16.166900711Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" Dec 13 13:33:16.167235 containerd[1709]: time="2024-12-13T13:33:16.167087116Z" level=info msg="TearDown network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" successfully" Dec 13 13:33:16.167235 containerd[1709]: time="2024-12-13T13:33:16.167105516Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" returns successfully" Dec 13 13:33:16.168095 containerd[1709]: time="2024-12-13T13:33:16.167934938Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:33:16.168095 containerd[1709]: time="2024-12-13T13:33:16.168024441Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully" Dec 13 13:33:16.168095 containerd[1709]: time="2024-12-13T13:33:16.168039041Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully" Dec 13 13:33:16.169624 containerd[1709]: time="2024-12-13T13:33:16.169461978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:33:16.540695 containerd[1709]: time="2024-12-13T13:33:16.540647247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:33:16.541684 containerd[1709]: time="2024-12-13T13:33:16.540971956Z" level=info msg="RemovePodSandbox \"c9c43a7c6b28804803fc3a5917d419eca74f6d1cb9caefbf976a951c519e132c\" returns successfully" Dec 13 13:33:16.542012 containerd[1709]: time="2024-12-13T13:33:16.541979782Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\"" Dec 13 13:33:16.542113 containerd[1709]: time="2024-12-13T13:33:16.542091785Z" level=info msg="TearDown network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" successfully" Dec 13 13:33:16.542175 containerd[1709]: time="2024-12-13T13:33:16.542108886Z" level=info msg="StopPodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" returns successfully" Dec 13 13:33:16.543812 containerd[1709]: time="2024-12-13T13:33:16.543604225Z" level=info msg="RemovePodSandbox for \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\"" Dec 13 13:33:16.543812 containerd[1709]: time="2024-12-13T13:33:16.543636126Z" level=info msg="Forcibly stopping sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\"" Dec 13 13:33:16.543812 containerd[1709]: time="2024-12-13T13:33:16.543712128Z" level=info msg="TearDown network for sandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" successfully" Dec 13 13:33:16.611310 systemd[1]: run-netns-cni\x2d2cc9b225\x2d616e\x2d6c94\x2d5bb8\x2d17cbbda67a8e.mount: Deactivated successfully. Dec 13 13:33:16.611732 systemd[1]: run-netns-cni\x2d5cbbb5ce\x2d0a35\x2d9733\x2d5367\x2d560cff6f2be6.mount: Deactivated successfully. Dec 13 13:33:16.611939 systemd[1]: run-netns-cni\x2de4d8f02c\x2df6dc\x2d58d6\x2de1a4\x2d8dcaf24b7d96.mount: Deactivated successfully. 
Dec 13 13:33:16.677256 containerd[1709]: time="2024-12-13T13:33:16.677206641Z" level=error msg="Failed to destroy network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.679175 containerd[1709]: time="2024-12-13T13:33:16.677994962Z" level=error msg="encountered an error cleaning up failed sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.679175 containerd[1709]: time="2024-12-13T13:33:16.678081464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.680927 kubelet[3413]: E1213 13:33:16.679908 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:16.680927 kubelet[3413]: E1213 13:33:16.679980 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:16.680927 kubelet[3413]: E1213 13:33:16.680012 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:16.681167 kubelet[3413]: E1213 13:33:16.680068 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-88gf8" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed" Dec 13 13:33:16.683847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869-shm.mount: Deactivated successfully. 
Dec 13 13:33:17.021320 containerd[1709]: time="2024-12-13T13:33:17.021108192Z" level=error msg="Failed to destroy network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:17.022993 containerd[1709]: time="2024-12-13T13:33:17.021994615Z" level=error msg="encountered an error cleaning up failed sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:17.022993 containerd[1709]: time="2024-12-13T13:33:17.022088018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:17.024593 kubelet[3413]: E1213 13:33:17.024262 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:17.024593 kubelet[3413]: E1213 13:33:17.024362 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:17.024593 kubelet[3413]: E1213 13:33:17.024431 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:17.025020 kubelet[3413]: E1213 13:33:17.024536 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podUID="0c845263-e633-4055-81f9-4aa28ad32b74" Dec 13 13:33:17.026859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41-shm.mount: Deactivated successfully. 
Dec 13 13:33:17.171993 kubelet[3413]: I1213 13:33:17.171820 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869" Dec 13 13:33:17.173389 containerd[1709]: time="2024-12-13T13:33:17.173180994Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\"" Dec 13 13:33:17.173864 containerd[1709]: time="2024-12-13T13:33:17.173682207Z" level=info msg="Ensure that sandbox 196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869 in task-service has been cleanup successfully" Dec 13 13:33:17.174141 containerd[1709]: time="2024-12-13T13:33:17.174118519Z" level=info msg="TearDown network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" successfully" Dec 13 13:33:17.174297 containerd[1709]: time="2024-12-13T13:33:17.174205921Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" returns successfully" Dec 13 13:33:17.177622 containerd[1709]: time="2024-12-13T13:33:17.176925293Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\"" Dec 13 13:33:17.177622 containerd[1709]: time="2024-12-13T13:33:17.177526508Z" level=info msg="TearDown network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" successfully" Dec 13 13:33:17.177622 containerd[1709]: time="2024-12-13T13:33:17.177583210Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" returns successfully" Dec 13 13:33:17.180483 containerd[1709]: time="2024-12-13T13:33:17.180114177Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\"" Dec 13 13:33:17.180215 systemd[1]: run-netns-cni\x2de21fbd59\x2d67d6\x2d1fe4\x2dd01c\x2daf353763d824.mount: Deactivated successfully. 
Dec 13 13:33:17.180656 kubelet[3413]: I1213 13:33:17.180599 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41" Dec 13 13:33:17.181534 containerd[1709]: time="2024-12-13T13:33:17.181355009Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\"" Dec 13 13:33:17.182305 containerd[1709]: time="2024-12-13T13:33:17.181717819Z" level=info msg="Ensure that sandbox 342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41 in task-service has been cleanup successfully" Dec 13 13:33:17.182305 containerd[1709]: time="2024-12-13T13:33:17.181828722Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully" Dec 13 13:33:17.182305 containerd[1709]: time="2024-12-13T13:33:17.181843722Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully" Dec 13 13:33:17.185901 containerd[1709]: time="2024-12-13T13:33:17.183758072Z" level=info msg="TearDown network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" successfully" Dec 13 13:33:17.185901 containerd[1709]: time="2024-12-13T13:33:17.183783573Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" returns successfully" Dec 13 13:33:17.185901 containerd[1709]: time="2024-12-13T13:33:17.185299513Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\"" Dec 13 13:33:17.185901 containerd[1709]: time="2024-12-13T13:33:17.185391315Z" level=info msg="TearDown network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" successfully" Dec 13 13:33:17.185901 containerd[1709]: time="2024-12-13T13:33:17.185404916Z" level=info msg="StopPodSandbox for 
\"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" returns successfully" Dec 13 13:33:17.185901 containerd[1709]: time="2024-12-13T13:33:17.185509919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:4,}" Dec 13 13:33:17.186958 containerd[1709]: time="2024-12-13T13:33:17.186908255Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\"" Dec 13 13:33:17.187042 containerd[1709]: time="2024-12-13T13:33:17.187023858Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully" Dec 13 13:33:17.187087 containerd[1709]: time="2024-12-13T13:33:17.187042459Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully" Dec 13 13:33:17.189267 systemd[1]: run-netns-cni\x2de7b2702d\x2dbb1a\x2d516b\x2de582\x2d764264b8e4b3.mount: Deactivated successfully. Dec 13 13:33:17.190371 containerd[1709]: time="2024-12-13T13:33:17.189629427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:4,}" Dec 13 13:33:17.341292 containerd[1709]: time="2024-12-13T13:33:17.341199516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:33:17.341851 containerd[1709]: time="2024-12-13T13:33:17.341797832Z" level=info msg="RemovePodSandbox \"512be05e9151ae9a584b66e39d1ec0802ac268d4aa921c7e1a4e6148d156309a\" returns successfully" Dec 13 13:33:17.607836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346906368.mount: Deactivated successfully. 
Dec 13 13:33:18.438862 containerd[1709]: time="2024-12-13T13:33:18.438808803Z" level=error msg="Failed to destroy network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.439555 containerd[1709]: time="2024-12-13T13:33:18.439187913Z" level=error msg="encountered an error cleaning up failed sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.439555 containerd[1709]: time="2024-12-13T13:33:18.439268115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.439663 kubelet[3413]: E1213 13:33:18.439575 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.440046 kubelet[3413]: E1213 13:33:18.439699 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4" Dec 13 13:33:18.440046 kubelet[3413]: E1213 13:33:18.439731 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4" Dec 13 13:33:18.440046 kubelet[3413]: E1213 13:33:18.439858 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxst4" podUID="d48882ed-a3fb-4cc6-a051-3acab30e260b" Dec 13 13:33:18.585302 containerd[1709]: time="2024-12-13T13:33:18.585244357Z" level=error msg="Failed to destroy network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 13:33:18.585597 containerd[1709]: time="2024-12-13T13:33:18.585564965Z" level=error msg="encountered an error cleaning up failed sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.585684 containerd[1709]: time="2024-12-13T13:33:18.585638067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.585946 kubelet[3413]: E1213 13:33:18.585907 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.586026 kubelet[3413]: E1213 13:33:18.585973 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:18.586026 
kubelet[3413]: E1213 13:33:18.586002 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:18.586108 kubelet[3413]: E1213 13:33:18.586061 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe" Dec 13 13:33:18.608146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb-shm.mount: Deactivated successfully. Dec 13 13:33:18.608275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe-shm.mount: Deactivated successfully. 
Dec 13 13:33:18.835793 containerd[1709]: time="2024-12-13T13:33:18.835727349Z" level=error msg="Failed to destroy network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.838055 containerd[1709]: time="2024-12-13T13:33:18.838005109Z" level=error msg="encountered an error cleaning up failed sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.838201 containerd[1709]: time="2024-12-13T13:33:18.838093111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.838387 kubelet[3413]: E1213 13:33:18.838337 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.838462 kubelet[3413]: E1213 13:33:18.838420 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:18.838462 kubelet[3413]: E1213 13:33:18.838447 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:18.838536 kubelet[3413]: E1213 13:33:18.838503 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:33:18.840188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a-shm.mount: Deactivated successfully. 
Dec 13 13:33:18.883341 containerd[1709]: time="2024-12-13T13:33:18.883296401Z" level=error msg="Failed to destroy network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.883806 containerd[1709]: time="2024-12-13T13:33:18.883639310Z" level=error msg="encountered an error cleaning up failed sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.883806 containerd[1709]: time="2024-12-13T13:33:18.883722612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.884784 kubelet[3413]: E1213 13:33:18.884634 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:18.884784 kubelet[3413]: E1213 13:33:18.884702 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:18.884784 kubelet[3413]: E1213 13:33:18.884730 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:18.885140 kubelet[3413]: E1213 13:33:18.885073 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef" Dec 13 13:33:18.888378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1-shm.mount: Deactivated successfully. 
Dec 13 13:33:19.089735 containerd[1709]: time="2024-12-13T13:33:19.089578430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:19.152798 containerd[1709]: time="2024-12-13T13:33:19.152722891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:33:19.188855 kubelet[3413]: I1213 13:33:19.188807 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1" Dec 13 13:33:19.189767 containerd[1709]: time="2024-12-13T13:33:19.189337655Z" level=info msg="StopPodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\"" Dec 13 13:33:19.189767 containerd[1709]: time="2024-12-13T13:33:19.189564361Z" level=info msg="Ensure that sandbox b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1 in task-service has been cleanup successfully" Dec 13 13:33:19.191219 containerd[1709]: time="2024-12-13T13:33:19.191190804Z" level=info msg="TearDown network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" successfully" Dec 13 13:33:19.191529 containerd[1709]: time="2024-12-13T13:33:19.191393009Z" level=info msg="StopPodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" returns successfully" Dec 13 13:33:19.193232 containerd[1709]: time="2024-12-13T13:33:19.193016652Z" level=info msg="StopPodSandbox for \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" Dec 13 13:33:19.193232 containerd[1709]: time="2024-12-13T13:33:19.193153356Z" level=info msg="TearDown network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" successfully" Dec 13 13:33:19.193232 containerd[1709]: time="2024-12-13T13:33:19.193169856Z" level=info msg="StopPodSandbox for 
\"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" returns successfully" Dec 13 13:33:19.194204 containerd[1709]: time="2024-12-13T13:33:19.194024578Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" Dec 13 13:33:19.194204 containerd[1709]: time="2024-12-13T13:33:19.194119881Z" level=info msg="TearDown network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" successfully" Dec 13 13:33:19.194204 containerd[1709]: time="2024-12-13T13:33:19.194135381Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" returns successfully" Dec 13 13:33:19.194595 containerd[1709]: time="2024-12-13T13:33:19.194571893Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:33:19.194797 containerd[1709]: time="2024-12-13T13:33:19.194776798Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully" Dec 13 13:33:19.194903 containerd[1709]: time="2024-12-13T13:33:19.194887601Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully" Dec 13 13:33:19.196555 containerd[1709]: time="2024-12-13T13:33:19.196530644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:33:19.197215 kubelet[3413]: I1213 13:33:19.197133 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe" Dec 13 13:33:19.198803 containerd[1709]: time="2024-12-13T13:33:19.197868480Z" level=info msg="StopPodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\"" Dec 13 13:33:19.200130 containerd[1709]: 
time="2024-12-13T13:33:19.199199815Z" level=info msg="Ensure that sandbox da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe in task-service has been cleanup successfully" Dec 13 13:33:19.200211 kubelet[3413]: I1213 13:33:19.199999 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb" Dec 13 13:33:19.201065 containerd[1709]: time="2024-12-13T13:33:19.200793657Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\"" Dec 13 13:33:19.201490 containerd[1709]: time="2024-12-13T13:33:19.201300170Z" level=info msg="TearDown network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" successfully" Dec 13 13:33:19.201490 containerd[1709]: time="2024-12-13T13:33:19.201321370Z" level=info msg="StopPodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" returns successfully" Dec 13 13:33:19.202253 containerd[1709]: time="2024-12-13T13:33:19.201681780Z" level=info msg="Ensure that sandbox de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb in task-service has been cleanup successfully" Dec 13 13:33:19.202253 containerd[1709]: time="2024-12-13T13:33:19.202106791Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" Dec 13 13:33:19.203066 kubelet[3413]: I1213 13:33:19.202646 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a" Dec 13 13:33:19.204029 containerd[1709]: time="2024-12-13T13:33:19.202815010Z" level=info msg="TearDown network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" successfully" Dec 13 13:33:19.204029 containerd[1709]: time="2024-12-13T13:33:19.202835310Z" level=info msg="StopPodSandbox for 
\"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" returns successfully" Dec 13 13:33:19.204029 containerd[1709]: time="2024-12-13T13:33:19.203626331Z" level=info msg="StopPodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\"" Dec 13 13:33:19.204029 containerd[1709]: time="2024-12-13T13:33:19.203724034Z" level=info msg="TearDown network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" successfully" Dec 13 13:33:19.204029 containerd[1709]: time="2024-12-13T13:33:19.203844337Z" level=info msg="Ensure that sandbox a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a in task-service has been cleanup successfully" Dec 13 13:33:19.204514 containerd[1709]: time="2024-12-13T13:33:19.204481654Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" Dec 13 13:33:19.204794 containerd[1709]: time="2024-12-13T13:33:19.203741434Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" returns successfully" Dec 13 13:33:19.204875 containerd[1709]: time="2024-12-13T13:33:19.204851463Z" level=info msg="TearDown network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" successfully" Dec 13 13:33:19.204875 containerd[1709]: time="2024-12-13T13:33:19.204867064Z" level=info msg="StopPodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" returns successfully" Dec 13 13:33:19.205005 containerd[1709]: time="2024-12-13T13:33:19.204986867Z" level=info msg="TearDown network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" successfully" Dec 13 13:33:19.205144 containerd[1709]: time="2024-12-13T13:33:19.205084470Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" returns successfully" Dec 13 13:33:19.205204 containerd[1709]: 
time="2024-12-13T13:33:19.205165572Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\"" Dec 13 13:33:19.205336 containerd[1709]: time="2024-12-13T13:33:19.205245274Z" level=info msg="TearDown network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" successfully" Dec 13 13:33:19.205336 containerd[1709]: time="2024-12-13T13:33:19.205261574Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" returns successfully" Dec 13 13:33:19.205336 containerd[1709]: time="2024-12-13T13:33:19.205315776Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" Dec 13 13:33:19.205464 containerd[1709]: time="2024-12-13T13:33:19.205388878Z" level=info msg="TearDown network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" successfully" Dec 13 13:33:19.205464 containerd[1709]: time="2024-12-13T13:33:19.205401778Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" returns successfully" Dec 13 13:33:19.206672 containerd[1709]: time="2024-12-13T13:33:19.206528108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:4,}" Dec 13 13:33:19.206928 containerd[1709]: time="2024-12-13T13:33:19.206862416Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\"" Dec 13 13:33:19.206991 containerd[1709]: time="2024-12-13T13:33:19.206951019Z" level=info msg="TearDown network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" successfully" Dec 13 13:33:19.206991 containerd[1709]: time="2024-12-13T13:33:19.206965919Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" returns successfully" Dec 13 
13:33:19.207088 containerd[1709]: time="2024-12-13T13:33:19.207029121Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\"" Dec 13 13:33:19.207132 containerd[1709]: time="2024-12-13T13:33:19.207101523Z" level=info msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully" Dec 13 13:33:19.207132 containerd[1709]: time="2024-12-13T13:33:19.207114123Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully" Dec 13 13:33:19.208075 containerd[1709]: time="2024-12-13T13:33:19.207766940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:33:19.208676 containerd[1709]: time="2024-12-13T13:33:19.208584262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:4,}" Dec 13 13:33:19.254921 containerd[1709]: time="2024-12-13T13:33:19.254873380Z" level=error msg="Failed to destroy network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.255405 containerd[1709]: time="2024-12-13T13:33:19.255289291Z" level=error msg="encountered an error cleaning up failed sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.255405 containerd[1709]: time="2024-12-13T13:33:19.255379993Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.255700 kubelet[3413]: E1213 13:33:19.255651 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.255823 kubelet[3413]: E1213 13:33:19.255717 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:19.255823 kubelet[3413]: E1213 13:33:19.255762 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:19.256211 kubelet[3413]: E1213 13:33:19.255812 3413 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-88gf8" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed" Dec 13 13:33:19.289189 containerd[1709]: time="2024-12-13T13:33:19.289143882Z" level=error msg="Failed to destroy network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.289461 containerd[1709]: time="2024-12-13T13:33:19.289429289Z" level=error msg="encountered an error cleaning up failed sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.289539 containerd[1709]: time="2024-12-13T13:33:19.289510691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.289788 kubelet[3413]: E1213 13:33:19.289735 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:19.289910 kubelet[3413]: E1213 13:33:19.289811 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:19.289910 kubelet[3413]: E1213 13:33:19.289837 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:19.289910 kubelet[3413]: E1213 13:33:19.289892 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podUID="0c845263-e633-4055-81f9-4aa28ad32b74" Dec 13 13:33:19.292318 containerd[1709]: time="2024-12-13T13:33:19.292279664Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:19.605188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267-shm.mount: Deactivated successfully. Dec 13 13:33:19.605317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d-shm.mount: Deactivated successfully. Dec 13 13:33:19.605395 systemd[1]: run-netns-cni\x2d274831ff\x2df724\x2dd179\x2d728e\x2d1237dd12b4f9.mount: Deactivated successfully. Dec 13 13:33:19.605477 systemd[1]: run-netns-cni\x2dfc697a0d\x2d0130\x2d57a9\x2d3c4e\x2d7ef55747ebc5.mount: Deactivated successfully. Dec 13 13:33:19.605553 systemd[1]: run-netns-cni\x2dc2d151a9\x2dbf50\x2dcc0c\x2d63e5\x2dc2900b414405.mount: Deactivated successfully. Dec 13 13:33:19.605629 systemd[1]: run-netns-cni\x2d8c235b58\x2d97b9\x2df24e\x2da9aa\x2de7d982ea8d88.mount: Deactivated successfully. 
Dec 13 13:33:19.840662 containerd[1709]: time="2024-12-13T13:33:19.840569194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:19.842473 containerd[1709]: time="2024-12-13T13:33:19.841807227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 16.916974364s" Dec 13 13:33:19.842473 containerd[1709]: time="2024-12-13T13:33:19.841857228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:33:19.861262 containerd[1709]: time="2024-12-13T13:33:19.861035133Z" level=info msg="CreateContainer within sandbox \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:33:20.207099 kubelet[3413]: I1213 13:33:20.206889 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d" Dec 13 13:33:20.208264 containerd[1709]: time="2024-12-13T13:33:20.207856460Z" level=info msg="StopPodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\"" Dec 13 13:33:20.208264 containerd[1709]: time="2024-12-13T13:33:20.208094667Z" level=info msg="Ensure that sandbox eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d in task-service has been cleanup successfully" Dec 13 13:33:20.210508 containerd[1709]: time="2024-12-13T13:33:20.210263024Z" level=info msg="TearDown network for sandbox 
\"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" successfully" Dec 13 13:33:20.210508 containerd[1709]: time="2024-12-13T13:33:20.210298925Z" level=info msg="StopPodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" returns successfully" Dec 13 13:33:20.211531 containerd[1709]: time="2024-12-13T13:33:20.211460355Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\"" Dec 13 13:33:20.213358 systemd[1]: run-netns-cni\x2d000461a0\x2d424c\x2de0b3\x2d738c\x2d603d9d293359.mount: Deactivated successfully. Dec 13 13:33:20.213884 containerd[1709]: time="2024-12-13T13:33:20.213498109Z" level=info msg="TearDown network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" successfully" Dec 13 13:33:20.213884 containerd[1709]: time="2024-12-13T13:33:20.213518909Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" returns successfully" Dec 13 13:33:20.214910 containerd[1709]: time="2024-12-13T13:33:20.214700340Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\"" Dec 13 13:33:20.214910 containerd[1709]: time="2024-12-13T13:33:20.214814243Z" level=info msg="TearDown network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" successfully" Dec 13 13:33:20.214910 containerd[1709]: time="2024-12-13T13:33:20.214829444Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" returns successfully" Dec 13 13:33:20.215098 kubelet[3413]: I1213 13:33:20.214966 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267" Dec 13 13:33:20.217785 containerd[1709]: time="2024-12-13T13:33:20.215431860Z" level=info msg="StopPodSandbox for 
\"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\"" Dec 13 13:33:20.217785 containerd[1709]: time="2024-12-13T13:33:20.215635465Z" level=info msg="Ensure that sandbox 3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267 in task-service has been cleanup successfully" Dec 13 13:33:20.218945 containerd[1709]: time="2024-12-13T13:33:20.218656545Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\"" Dec 13 13:33:20.218945 containerd[1709]: time="2024-12-13T13:33:20.218764947Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully" Dec 13 13:33:20.218945 containerd[1709]: time="2024-12-13T13:33:20.218779048Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully" Dec 13 13:33:20.219909 containerd[1709]: time="2024-12-13T13:33:20.219085156Z" level=info msg="TearDown network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" successfully" Dec 13 13:33:20.220067 containerd[1709]: time="2024-12-13T13:33:20.219988580Z" level=info msg="StopPodSandbox for \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" returns successfully" Dec 13 13:33:20.220125 containerd[1709]: time="2024-12-13T13:33:20.219327962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:5,}" Dec 13 13:33:20.221347 containerd[1709]: time="2024-12-13T13:33:20.221157010Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\"" Dec 13 13:33:20.221347 containerd[1709]: time="2024-12-13T13:33:20.221295814Z" level=info msg="TearDown network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" successfully" Dec 13 13:33:20.221347 containerd[1709]: 
time="2024-12-13T13:33:20.221310714Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" returns successfully" Dec 13 13:33:20.222162 systemd[1]: run-netns-cni\x2d1a6b5da3\x2dbde3\x2d061d\x2d0c86\x2d0b2cc30a1ba2.mount: Deactivated successfully. Dec 13 13:33:20.222273 containerd[1709]: time="2024-12-13T13:33:20.222212738Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\"" Dec 13 13:33:20.222539 containerd[1709]: time="2024-12-13T13:33:20.222285940Z" level=info msg="TearDown network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" successfully" Dec 13 13:33:20.222539 containerd[1709]: time="2024-12-13T13:33:20.222299640Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" returns successfully" Dec 13 13:33:20.222644 containerd[1709]: time="2024-12-13T13:33:20.222589648Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\"" Dec 13 13:33:20.222685 containerd[1709]: time="2024-12-13T13:33:20.222669550Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully" Dec 13 13:33:20.222722 containerd[1709]: time="2024-12-13T13:33:20.222682551Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully" Dec 13 13:33:20.223113 containerd[1709]: time="2024-12-13T13:33:20.223052960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:5,}" Dec 13 13:33:21.086306 containerd[1709]: time="2024-12-13T13:33:21.086258578Z" level=error msg="Failed to destroy network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.086899 containerd[1709]: time="2024-12-13T13:33:21.086656889Z" level=error msg="encountered an error cleaning up failed sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.086899 containerd[1709]: time="2024-12-13T13:33:21.086761691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.087121 kubelet[3413]: E1213 13:33:21.087073 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.087190 kubelet[3413]: E1213 13:33:21.087144 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:21.087190 kubelet[3413]: E1213 13:33:21.087174 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:21.087278 kubelet[3413]: E1213 13:33:21.087237 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe" Dec 13 13:33:21.224356 kubelet[3413]: I1213 13:33:21.223194 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784" Dec 13 13:33:21.225279 containerd[1709]: time="2024-12-13T13:33:21.225236636Z" level=info msg="StopPodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\"" Dec 13 13:33:21.225536 containerd[1709]: time="2024-12-13T13:33:21.225495642Z" level=info msg="Ensure that sandbox 
05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784 in task-service has been cleanup successfully" Dec 13 13:33:21.226523 containerd[1709]: time="2024-12-13T13:33:21.226493469Z" level=info msg="TearDown network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" successfully" Dec 13 13:33:21.226523 containerd[1709]: time="2024-12-13T13:33:21.226521669Z" level=info msg="StopPodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" returns successfully" Dec 13 13:33:21.228602 containerd[1709]: time="2024-12-13T13:33:21.228572723Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\"" Dec 13 13:33:21.228994 containerd[1709]: time="2024-12-13T13:33:21.228671526Z" level=info msg="TearDown network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" successfully" Dec 13 13:33:21.228994 containerd[1709]: time="2024-12-13T13:33:21.228691127Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" returns successfully" Dec 13 13:33:21.229922 containerd[1709]: time="2024-12-13T13:33:21.229678253Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\"" Dec 13 13:33:21.229922 containerd[1709]: time="2024-12-13T13:33:21.229822356Z" level=info msg="TearDown network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" successfully" Dec 13 13:33:21.229922 containerd[1709]: time="2024-12-13T13:33:21.229852557Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" returns successfully" Dec 13 13:33:21.230508 containerd[1709]: time="2024-12-13T13:33:21.230477474Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\"" Dec 13 13:33:21.230704 containerd[1709]: time="2024-12-13T13:33:21.230588776Z" level=info 
msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully" Dec 13 13:33:21.230704 containerd[1709]: time="2024-12-13T13:33:21.230626277Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully" Dec 13 13:33:21.232661 containerd[1709]: time="2024-12-13T13:33:21.231889811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:33:21.240386 containerd[1709]: time="2024-12-13T13:33:21.240355134Z" level=info msg="CreateContainer within sandbox \"71bb6a202b3ea3dfecad0e4e7900674bbbab9454d27304ecdd8a01002789e33f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"843f11f7b4c9bcd646bbf654fb595beef818af7ae9c2848e50d7d8ee5fc8c06f\"" Dec 13 13:33:21.245386 containerd[1709]: time="2024-12-13T13:33:21.245353365Z" level=info msg="StartContainer for \"843f11f7b4c9bcd646bbf654fb595beef818af7ae9c2848e50d7d8ee5fc8c06f\"" Dec 13 13:33:21.322024 systemd[1]: Started cri-containerd-843f11f7b4c9bcd646bbf654fb595beef818af7ae9c2848e50d7d8ee5fc8c06f.scope - libcontainer container 843f11f7b4c9bcd646bbf654fb595beef818af7ae9c2848e50d7d8ee5fc8c06f. 
Dec 13 13:33:21.395802 containerd[1709]: time="2024-12-13T13:33:21.395646620Z" level=error msg="Failed to destroy network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.396773 containerd[1709]: time="2024-12-13T13:33:21.396022930Z" level=error msg="encountered an error cleaning up failed sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.396773 containerd[1709]: time="2024-12-13T13:33:21.396139333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.396929 kubelet[3413]: E1213 13:33:21.396677 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.396929 kubelet[3413]: E1213 13:33:21.396770 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4" Dec 13 13:33:21.396929 kubelet[3413]: E1213 13:33:21.396795 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxst4" Dec 13 13:33:21.397039 kubelet[3413]: E1213 13:33:21.396847 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxst4_kube-system(d48882ed-a3fb-4cc6-a051-3acab30e260b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxst4" podUID="d48882ed-a3fb-4cc6-a051-3acab30e260b" Dec 13 13:33:21.427502 containerd[1709]: time="2024-12-13T13:33:21.427441357Z" level=info msg="StartContainer for \"843f11f7b4c9bcd646bbf654fb595beef818af7ae9c2848e50d7d8ee5fc8c06f\" returns successfully" Dec 13 13:33:21.448718 containerd[1709]: time="2024-12-13T13:33:21.448585714Z" level=error msg="Failed to destroy network for sandbox 
\"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.449280 containerd[1709]: time="2024-12-13T13:33:21.449125928Z" level=error msg="encountered an error cleaning up failed sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.449280 containerd[1709]: time="2024-12-13T13:33:21.449209830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.450184 kubelet[3413]: E1213 13:33:21.450023 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.450184 kubelet[3413]: E1213 13:33:21.450102 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:21.450184 kubelet[3413]: E1213 13:33:21.450137 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" Dec 13 13:33:21.450367 kubelet[3413]: E1213 13:33:21.450198 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-x98kx_calico-apiserver(4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podUID="4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef" Dec 13 13:33:21.463769 containerd[1709]: time="2024-12-13T13:33:21.461064442Z" level=error msg="Failed to destroy network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.463769 containerd[1709]: 
time="2024-12-13T13:33:21.461440452Z" level=error msg="encountered an error cleaning up failed sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.463769 containerd[1709]: time="2024-12-13T13:33:21.461512254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.463963 kubelet[3413]: E1213 13:33:21.461752 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.463963 kubelet[3413]: E1213 13:33:21.461831 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:21.463963 kubelet[3413]: E1213 13:33:21.461886 3413 kuberuntime_manager.go:1166] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7zsr" Dec 13 13:33:21.464127 kubelet[3413]: E1213 13:33:21.462035 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7zsr_calico-system(5e38b74a-209a-4cd3-be7c-117000f59938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7zsr" podUID="5e38b74a-209a-4cd3-be7c-117000f59938" Dec 13 13:33:21.521180 containerd[1709]: time="2024-12-13T13:33:21.521124123Z" level=error msg="Failed to destroy network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.524798 containerd[1709]: time="2024-12-13T13:33:21.522796967Z" level=error msg="encountered an error cleaning up failed sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 13:33:21.524798 containerd[1709]: time="2024-12-13T13:33:21.522985472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.524972 kubelet[3413]: E1213 13:33:21.523404 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.524972 kubelet[3413]: E1213 13:33:21.523483 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" Dec 13 13:33:21.524972 kubelet[3413]: E1213 13:33:21.523510 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-88gf8" 
Dec 13 13:33:21.525121 kubelet[3413]: E1213 13:33:21.523565 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-88gf8_kube-system(3098ed4c-c400-4c97-958d-d1930afff8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-88gf8" podUID="3098ed4c-c400-4c97-958d-d1930afff8ed" Dec 13 13:33:21.532190 containerd[1709]: time="2024-12-13T13:33:21.532132212Z" level=error msg="Failed to destroy network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.532741 containerd[1709]: time="2024-12-13T13:33:21.532698427Z" level=error msg="encountered an error cleaning up failed sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.532951 containerd[1709]: time="2024-12-13T13:33:21.532906733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.534227 kubelet[3413]: E1213 13:33:21.533290 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.534227 kubelet[3413]: E1213 13:33:21.533350 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:21.534227 kubelet[3413]: E1213 13:33:21.533377 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" Dec 13 13:33:21.534419 kubelet[3413]: E1213 13:33:21.533429 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbfd9d889-rxm9h_calico-system(0c845263-e633-4055-81f9-4aa28ad32b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podUID="0c845263-e633-4055-81f9-4aa28ad32b74" Dec 13 13:33:21.539895 containerd[1709]: time="2024-12-13T13:33:21.539848716Z" level=error msg="Failed to destroy network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.540116 containerd[1709]: time="2024-12-13T13:33:21.540090622Z" level=error msg="encountered an error cleaning up failed sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.540184 containerd[1709]: time="2024-12-13T13:33:21.540152324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.540373 kubelet[3413]: 
E1213 13:33:21.540347 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:21.540439 kubelet[3413]: E1213 13:33:21.540396 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:21.540439 kubelet[3413]: E1213 13:33:21.540425 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" Dec 13 13:33:21.540516 kubelet[3413]: E1213 13:33:21.540478 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5f78578d-dhdkn_calico-apiserver(e90719d9-2bf9-4651-a5b8-e332bf6846fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podUID="e90719d9-2bf9-4651-a5b8-e332bf6846fe" Dec 13 13:33:21.609193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb-shm.mount: Deactivated successfully. Dec 13 13:33:21.609310 systemd[1]: run-netns-cni\x2dbd70b237\x2d9209\x2d117c\x2d371e\x2d8ae50d3e6aa4.mount: Deactivated successfully. Dec 13 13:33:21.609387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784-shm.mount: Deactivated successfully. Dec 13 13:33:21.923362 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:33:21.923510 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 13:33:22.229501 kubelet[3413]: I1213 13:33:22.229376 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482" Dec 13 13:33:22.231824 containerd[1709]: time="2024-12-13T13:33:22.231629122Z" level=info msg="StopPodSandbox for \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\"" Dec 13 13:33:22.232145 containerd[1709]: time="2024-12-13T13:33:22.232065833Z" level=info msg="Ensure that sandbox 1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482 in task-service has been cleanup successfully" Dec 13 13:33:22.236770 containerd[1709]: time="2024-12-13T13:33:22.233233864Z" level=info msg="TearDown network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\" successfully" Dec 13 13:33:22.236770 containerd[1709]: time="2024-12-13T13:33:22.233304466Z" level=info msg="StopPodSandbox for 
\"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\" returns successfully" Dec 13 13:33:22.236576 systemd[1]: run-netns-cni\x2df92b8c52\x2d80dc\x2d5fde\x2dc616\x2d6b731958c0a9.mount: Deactivated successfully. Dec 13 13:33:22.237462 containerd[1709]: time="2024-12-13T13:33:22.237176068Z" level=info msg="StopPodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\"" Dec 13 13:33:22.237462 containerd[1709]: time="2024-12-13T13:33:22.237267770Z" level=info msg="TearDown network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" successfully" Dec 13 13:33:22.237462 containerd[1709]: time="2024-12-13T13:33:22.237280570Z" level=info msg="StopPodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" returns successfully" Dec 13 13:33:22.240080 containerd[1709]: time="2024-12-13T13:33:22.239898239Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\"" Dec 13 13:33:22.240080 containerd[1709]: time="2024-12-13T13:33:22.239991842Z" level=info msg="TearDown network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" successfully" Dec 13 13:33:22.240080 containerd[1709]: time="2024-12-13T13:33:22.240006242Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" returns successfully" Dec 13 13:33:22.240258 kubelet[3413]: I1213 13:33:22.240219 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424" Dec 13 13:33:22.240832 containerd[1709]: time="2024-12-13T13:33:22.240531656Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\"" Dec 13 13:33:22.240832 containerd[1709]: time="2024-12-13T13:33:22.240614058Z" level=info msg="TearDown network for sandbox 
\"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" successfully" Dec 13 13:33:22.240832 containerd[1709]: time="2024-12-13T13:33:22.240630359Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" returns successfully" Dec 13 13:33:22.241186 containerd[1709]: time="2024-12-13T13:33:22.241082271Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\"" Dec 13 13:33:22.241186 containerd[1709]: time="2024-12-13T13:33:22.241131672Z" level=info msg="StopPodSandbox for \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\"" Dec 13 13:33:22.241186 containerd[1709]: time="2024-12-13T13:33:22.241176673Z" level=info msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully" Dec 13 13:33:22.241346 containerd[1709]: time="2024-12-13T13:33:22.241191173Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully" Dec 13 13:33:22.241901 containerd[1709]: time="2024-12-13T13:33:22.241715787Z" level=info msg="Ensure that sandbox 79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424 in task-service has been cleanup successfully" Dec 13 13:33:22.242039 containerd[1709]: time="2024-12-13T13:33:22.241973994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:33:22.242265 containerd[1709]: time="2024-12-13T13:33:22.242166499Z" level=info msg="TearDown network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\" successfully" Dec 13 13:33:22.242265 containerd[1709]: time="2024-12-13T13:33:22.242201800Z" level=info msg="StopPodSandbox for \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\" returns successfully" Dec 13 13:33:22.245153 containerd[1709]: 
time="2024-12-13T13:33:22.244980873Z" level=info msg="StopPodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\"" Dec 13 13:33:22.245153 containerd[1709]: time="2024-12-13T13:33:22.245068275Z" level=info msg="TearDown network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" successfully" Dec 13 13:33:22.245153 containerd[1709]: time="2024-12-13T13:33:22.245084576Z" level=info msg="StopPodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" returns successfully" Dec 13 13:33:22.246419 systemd[1]: run-netns-cni\x2d3aabe5bd\x2dfb9d\x2dce3b\x2dba21\x2dec20f22cde26.mount: Deactivated successfully. Dec 13 13:33:22.246875 containerd[1709]: time="2024-12-13T13:33:22.246583715Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" Dec 13 13:33:22.246875 containerd[1709]: time="2024-12-13T13:33:22.246670818Z" level=info msg="TearDown network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" successfully" Dec 13 13:33:22.246875 containerd[1709]: time="2024-12-13T13:33:22.246684618Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" returns successfully" Dec 13 13:33:22.247808 containerd[1709]: time="2024-12-13T13:33:22.247562841Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\"" Dec 13 13:33:22.247808 containerd[1709]: time="2024-12-13T13:33:22.247679544Z" level=info msg="TearDown network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" successfully" Dec 13 13:33:22.247808 containerd[1709]: time="2024-12-13T13:33:22.247714645Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" returns successfully" Dec 13 13:33:22.248200 kubelet[3413]: I1213 13:33:22.248165 3413 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb" Dec 13 13:33:22.250545 containerd[1709]: time="2024-12-13T13:33:22.248925177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:5,}" Dec 13 13:33:22.250545 containerd[1709]: time="2024-12-13T13:33:22.249266086Z" level=info msg="StopPodSandbox for \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\"" Dec 13 13:33:22.250545 containerd[1709]: time="2024-12-13T13:33:22.249556994Z" level=info msg="Ensure that sandbox ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb in task-service has been cleanup successfully" Dec 13 13:33:22.251196 containerd[1709]: time="2024-12-13T13:33:22.251170736Z" level=info msg="TearDown network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\" successfully" Dec 13 13:33:22.251196 containerd[1709]: time="2024-12-13T13:33:22.251194237Z" level=info msg="StopPodSandbox for \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\" returns successfully" Dec 13 13:33:22.253739 containerd[1709]: time="2024-12-13T13:33:22.251897455Z" level=info msg="StopPodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\"" Dec 13 13:33:22.253739 containerd[1709]: time="2024-12-13T13:33:22.252002358Z" level=info msg="TearDown network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" successfully" Dec 13 13:33:22.253739 containerd[1709]: time="2024-12-13T13:33:22.252016158Z" level=info msg="StopPodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" returns successfully" Dec 13 13:33:22.256993 systemd[1]: run-netns-cni\x2dc71de607\x2dc9f5\x2dc54a\x2d6e85\x2d9c34403e254e.mount: Deactivated successfully. 
Dec 13 13:33:22.259515 containerd[1709]: time="2024-12-13T13:33:22.259158646Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" Dec 13 13:33:22.262600 containerd[1709]: time="2024-12-13T13:33:22.262558136Z" level=info msg="TearDown network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" successfully" Dec 13 13:33:22.262600 containerd[1709]: time="2024-12-13T13:33:22.262584436Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" returns successfully" Dec 13 13:33:22.270580 containerd[1709]: time="2024-12-13T13:33:22.270416943Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" Dec 13 13:33:22.271383 containerd[1709]: time="2024-12-13T13:33:22.271328467Z" level=info msg="TearDown network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" successfully" Dec 13 13:33:22.272036 containerd[1709]: time="2024-12-13T13:33:22.271639275Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" returns successfully" Dec 13 13:33:22.278790 containerd[1709]: time="2024-12-13T13:33:22.276681807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:5,}" Dec 13 13:33:22.283109 kubelet[3413]: I1213 13:33:22.283073 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0" Dec 13 13:33:22.284322 containerd[1709]: time="2024-12-13T13:33:22.284280707Z" level=info msg="StopPodSandbox for \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\"" Dec 13 13:33:22.285137 containerd[1709]: time="2024-12-13T13:33:22.284986526Z" level=info msg="Ensure that sandbox 
5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0 in task-service has been cleanup successfully" Dec 13 13:33:22.288433 containerd[1709]: time="2024-12-13T13:33:22.288356315Z" level=info msg="TearDown network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\" successfully" Dec 13 13:33:22.288433 containerd[1709]: time="2024-12-13T13:33:22.288380715Z" level=info msg="StopPodSandbox for \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\" returns successfully" Dec 13 13:33:22.291026 systemd[1]: run-netns-cni\x2d3c342a2a\x2d5f75\x2d4d81\x2d9ed2\x2da31cf4245539.mount: Deactivated successfully. Dec 13 13:33:22.293679 containerd[1709]: time="2024-12-13T13:33:22.293321745Z" level=info msg="StopPodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\"" Dec 13 13:33:22.294917 kubelet[3413]: I1213 13:33:22.294363 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f9x9t" podStartSLOduration=4.830385841 podStartE2EDuration="51.294327972s" podCreationTimestamp="2024-12-13 13:32:31 +0000 UTC" firstStartedPulling="2024-12-13 13:32:33.378947924 +0000 UTC m=+27.750085102" lastFinishedPulling="2024-12-13 13:33:19.842889855 +0000 UTC m=+74.214027233" observedRunningTime="2024-12-13 13:33:22.289988858 +0000 UTC m=+76.661126036" watchObservedRunningTime="2024-12-13 13:33:22.294327972 +0000 UTC m=+76.665465050" Dec 13 13:33:22.295532 containerd[1709]: time="2024-12-13T13:33:22.295317898Z" level=info msg="TearDown network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" successfully" Dec 13 13:33:22.295661 containerd[1709]: time="2024-12-13T13:33:22.295641806Z" level=info msg="StopPodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" returns successfully" Dec 13 13:33:22.296295 containerd[1709]: time="2024-12-13T13:33:22.296271523Z" level=info msg="StopPodSandbox for 
\"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" Dec 13 13:33:22.298266 containerd[1709]: time="2024-12-13T13:33:22.298239575Z" level=info msg="TearDown network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" successfully" Dec 13 13:33:22.298343 containerd[1709]: time="2024-12-13T13:33:22.298266275Z" level=info msg="StopPodSandbox for \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" returns successfully" Dec 13 13:33:22.299599 containerd[1709]: time="2024-12-13T13:33:22.299574110Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" Dec 13 13:33:22.299758 containerd[1709]: time="2024-12-13T13:33:22.299692413Z" level=info msg="TearDown network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" successfully" Dec 13 13:33:22.299758 containerd[1709]: time="2024-12-13T13:33:22.299711114Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" returns successfully" Dec 13 13:33:22.300112 containerd[1709]: time="2024-12-13T13:33:22.299987821Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:33:22.300181 containerd[1709]: time="2024-12-13T13:33:22.300113224Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully" Dec 13 13:33:22.300181 containerd[1709]: time="2024-12-13T13:33:22.300129125Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully" Dec 13 13:33:22.301337 containerd[1709]: time="2024-12-13T13:33:22.300964146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:33:22.302303 kubelet[3413]: I1213 13:33:22.301632 3413 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716" Dec 13 13:33:22.303846 containerd[1709]: time="2024-12-13T13:33:22.303820822Z" level=info msg="StopPodSandbox for \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\"" Dec 13 13:33:22.304182 containerd[1709]: time="2024-12-13T13:33:22.304148330Z" level=info msg="Ensure that sandbox 837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716 in task-service has been cleanup successfully" Dec 13 13:33:22.304594 containerd[1709]: time="2024-12-13T13:33:22.304438438Z" level=info msg="TearDown network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\" successfully" Dec 13 13:33:22.304709 containerd[1709]: time="2024-12-13T13:33:22.304686044Z" level=info msg="StopPodSandbox for \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\" returns successfully" Dec 13 13:33:22.305416 containerd[1709]: time="2024-12-13T13:33:22.305192458Z" level=info msg="StopPodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\"" Dec 13 13:33:22.305416 containerd[1709]: time="2024-12-13T13:33:22.305292760Z" level=info msg="TearDown network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" successfully" Dec 13 13:33:22.305416 containerd[1709]: time="2024-12-13T13:33:22.305309561Z" level=info msg="StopPodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" returns successfully" Dec 13 13:33:22.306136 containerd[1709]: time="2024-12-13T13:33:22.306111182Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\"" Dec 13 13:33:22.306714 containerd[1709]: time="2024-12-13T13:33:22.306308387Z" level=info msg="TearDown network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" successfully" Dec 13 13:33:22.306714 
containerd[1709]: time="2024-12-13T13:33:22.306327488Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" returns successfully" Dec 13 13:33:22.306931 containerd[1709]: time="2024-12-13T13:33:22.306907703Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\"" Dec 13 13:33:22.307388 containerd[1709]: time="2024-12-13T13:33:22.307112608Z" level=info msg="TearDown network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" successfully" Dec 13 13:33:22.307388 containerd[1709]: time="2024-12-13T13:33:22.307238712Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" returns successfully" Dec 13 13:33:22.307963 containerd[1709]: time="2024-12-13T13:33:22.307935930Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\"" Dec 13 13:33:22.308292 containerd[1709]: time="2024-12-13T13:33:22.308249338Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully" Dec 13 13:33:22.308292 containerd[1709]: time="2024-12-13T13:33:22.308271439Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully" Dec 13 13:33:22.308712 kubelet[3413]: I1213 13:33:22.308692 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6" Dec 13 13:33:22.309866 containerd[1709]: time="2024-12-13T13:33:22.309741477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:6,}" Dec 13 13:33:22.311791 containerd[1709]: time="2024-12-13T13:33:22.310396095Z" level=info msg="StopPodSandbox for 
\"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\"" Dec 13 13:33:22.312439 containerd[1709]: time="2024-12-13T13:33:22.312325746Z" level=info msg="Ensure that sandbox 84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6 in task-service has been cleanup successfully" Dec 13 13:33:22.312684 containerd[1709]: time="2024-12-13T13:33:22.312627953Z" level=info msg="TearDown network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\" successfully" Dec 13 13:33:22.312684 containerd[1709]: time="2024-12-13T13:33:22.312649454Z" level=info msg="StopPodSandbox for \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\" returns successfully" Dec 13 13:33:22.314026 containerd[1709]: time="2024-12-13T13:33:22.313772984Z" level=info msg="StopPodSandbox for \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\"" Dec 13 13:33:22.314026 containerd[1709]: time="2024-12-13T13:33:22.313875286Z" level=info msg="TearDown network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" successfully" Dec 13 13:33:22.314026 containerd[1709]: time="2024-12-13T13:33:22.313891587Z" level=info msg="StopPodSandbox for \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" returns successfully" Dec 13 13:33:22.314481 containerd[1709]: time="2024-12-13T13:33:22.314342199Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\"" Dec 13 13:33:22.314727 containerd[1709]: time="2024-12-13T13:33:22.314687708Z" level=info msg="TearDown network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" successfully" Dec 13 13:33:22.314887 containerd[1709]: time="2024-12-13T13:33:22.314824711Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" returns successfully" Dec 13 13:33:22.315291 containerd[1709]: time="2024-12-13T13:33:22.315226322Z" level=info 
msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\"" Dec 13 13:33:22.315440 containerd[1709]: time="2024-12-13T13:33:22.315411827Z" level=info msg="TearDown network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" successfully" Dec 13 13:33:22.315587 containerd[1709]: time="2024-12-13T13:33:22.315513929Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" returns successfully" Dec 13 13:33:22.316101 containerd[1709]: time="2024-12-13T13:33:22.316039643Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\"" Dec 13 13:33:22.316321 containerd[1709]: time="2024-12-13T13:33:22.316299850Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully" Dec 13 13:33:22.316433 containerd[1709]: time="2024-12-13T13:33:22.316416853Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully" Dec 13 13:33:22.317910 containerd[1709]: time="2024-12-13T13:33:22.317868491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:6,}" Dec 13 13:33:22.537940 systemd-networkd[1499]: cali124de79cc83: Link UP Dec 13 13:33:22.540192 systemd-networkd[1499]: cali124de79cc83: Gained carrier Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.374 [INFO][5312] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.390 [INFO][5312] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0 csi-node-driver- calico-system 5e38b74a-209a-4cd3-be7c-117000f59938 643 0 2024-12-13 13:32:32 +0000 
UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.0.0-a-6a956dd616 csi-node-driver-l7zsr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali124de79cc83 [] []}} ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.390 [INFO][5312] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.434 [INFO][5334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" HandleID="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Workload="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.447 [INFO][5334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" HandleID="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Workload="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.0.0-a-6a956dd616", "pod":"csi-node-driver-l7zsr", "timestamp":"2024-12-13 
13:33:22.434889371 +0000 UTC"}, Hostname:"ci-4186.0.0-a-6a956dd616", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.447 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.447 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.447 [INFO][5334] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-6a956dd616' Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.449 [INFO][5334] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.453 [INFO][5334] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.464 [INFO][5334] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.467 [INFO][5334] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.471 [INFO][5334] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.471 [INFO][5334] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 
2024-12-13 13:33:22.474 [INFO][5334] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.482 [INFO][5334] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.499 [INFO][5334] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.1/26] block=192.168.9.0/26 handle="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.499 [INFO][5334] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.1/26] handle="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.499 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:33:22.586766 containerd[1709]: 2024-12-13 13:33:22.499 [INFO][5334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.1/26] IPv6=[] ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" HandleID="k8s-pod-network.f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Workload="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.587675 containerd[1709]: 2024-12-13 13:33:22.513 [INFO][5312] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e38b74a-209a-4cd3-be7c-117000f59938", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"", Pod:"csi-node-driver-l7zsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali124de79cc83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.587675 containerd[1709]: 2024-12-13 13:33:22.513 [INFO][5312] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.1/32] ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.587675 containerd[1709]: 2024-12-13 13:33:22.513 [INFO][5312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali124de79cc83 ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.587675 containerd[1709]: 2024-12-13 13:33:22.547 [INFO][5312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.587675 containerd[1709]: 2024-12-13 13:33:22.552 [INFO][5312] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"5e38b74a-209a-4cd3-be7c-117000f59938", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d", Pod:"csi-node-driver-l7zsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali124de79cc83", MAC:"46:0b:4e:83:2b:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.587675 containerd[1709]: 2024-12-13 13:33:22.579 [INFO][5312] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d" Namespace="calico-system" Pod="csi-node-driver-l7zsr" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-csi--node--driver--l7zsr-eth0" Dec 13 13:33:22.627945 systemd[1]: run-netns-cni\x2d09833f92\x2d69d1\x2d27e8\x2d7489\x2de7854f0f9563.mount: Deactivated successfully. Dec 13 13:33:22.628057 systemd[1]: run-netns-cni\x2df1bfd245\x2d85a0\x2df2c3\x2da0be\x2dfd92819a352d.mount: Deactivated successfully. 
Dec 13 13:33:22.701497 systemd-networkd[1499]: cali1c4477f8cb9: Link UP Dec 13 13:33:22.702940 systemd-networkd[1499]: cali1c4477f8cb9: Gained carrier Dec 13 13:33:22.707540 containerd[1709]: time="2024-12-13T13:33:22.707344342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:22.707540 containerd[1709]: time="2024-12-13T13:33:22.707434344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:22.708228 containerd[1709]: time="2024-12-13T13:33:22.707459345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:22.711471 containerd[1709]: time="2024-12-13T13:33:22.710777532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.425 [INFO][5325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.443 [INFO][5325] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0 calico-apiserver-c5f78578d- calico-apiserver e90719d9-2bf9-4651-a5b8-e332bf6846fe 763 0 2024-12-13 13:32:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5f78578d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.0.0-a-6a956dd616 calico-apiserver-c5f78578d-dhdkn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c4477f8cb9 [] []}} 
ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.443 [INFO][5325] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.565 [INFO][5344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" HandleID="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.599 [INFO][5344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" HandleID="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.0.0-a-6a956dd616", "pod":"calico-apiserver-c5f78578d-dhdkn", "timestamp":"2024-12-13 13:33:22.565482608 +0000 UTC"}, Hostname:"ci-4186.0.0-a-6a956dd616", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.599 [INFO][5344] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.599 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.599 [INFO][5344] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-6a956dd616' Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.603 [INFO][5344] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.612 [INFO][5344] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.630 [INFO][5344] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.632 [INFO][5344] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.634 [INFO][5344] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.634 [INFO][5344] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.636 [INFO][5344] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57 Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.655 [INFO][5344] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 
handle="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.673 [INFO][5344] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.2/26] block=192.168.9.0/26 handle="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.673 [INFO][5344] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.2/26] handle="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.673 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:33:22.747416 containerd[1709]: 2024-12-13 13:33:22.673 [INFO][5344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.2/26] IPv6=[] ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" HandleID="k8s-pod-network.0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.748837 containerd[1709]: 2024-12-13 13:33:22.689 [INFO][5325] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0", GenerateName:"calico-apiserver-c5f78578d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90719d9-2bf9-4651-a5b8-e332bf6846fe", ResourceVersion:"763", Generation:0, 
CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5f78578d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"", Pod:"calico-apiserver-c5f78578d-dhdkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c4477f8cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.748837 containerd[1709]: 2024-12-13 13:33:22.690 [INFO][5325] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.2/32] ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.748837 containerd[1709]: 2024-12-13 13:33:22.690 [INFO][5325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c4477f8cb9 ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.748837 containerd[1709]: 2024-12-13 13:33:22.706 [INFO][5325] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.748837 containerd[1709]: 2024-12-13 13:33:22.711 [INFO][5325] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0", GenerateName:"calico-apiserver-c5f78578d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90719d9-2bf9-4651-a5b8-e332bf6846fe", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5f78578d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57", Pod:"calico-apiserver-c5f78578d-dhdkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c4477f8cb9", MAC:"b2:87:8b:52:df:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.748837 containerd[1709]: 2024-12-13 13:33:22.742 [INFO][5325] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-dhdkn" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--dhdkn-eth0" Dec 13 13:33:22.783355 systemd[1]: run-containerd-runc-k8s.io-f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d-runc.Hceru8.mount: Deactivated successfully. Dec 13 13:33:22.796917 systemd[1]: Started cri-containerd-f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d.scope - libcontainer container f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d. Dec 13 13:33:22.865584 systemd-networkd[1499]: cali3c0e292e2ee: Link UP Dec 13 13:33:22.866904 systemd-networkd[1499]: cali3c0e292e2ee: Gained carrier Dec 13 13:33:22.894949 containerd[1709]: time="2024-12-13T13:33:22.894580866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:22.899795 containerd[1709]: time="2024-12-13T13:33:22.898738075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:22.899795 containerd[1709]: time="2024-12-13T13:33:22.898805477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:22.899795 containerd[1709]: time="2024-12-13T13:33:22.898888979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.533 [INFO][5343] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.565 [INFO][5343] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0 coredns-7db6d8ff4d- kube-system d48882ed-a3fb-4cc6-a051-3acab30e260b 764 0 2024-12-13 13:32:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.0.0-a-6a956dd616 coredns-7db6d8ff4d-nxst4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c0e292e2ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.565 [INFO][5343] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.751 [INFO][5410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" HandleID="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Workload="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.768 [INFO][5410] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" HandleID="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Workload="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033bd80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.0.0-a-6a956dd616", "pod":"coredns-7db6d8ff4d-nxst4", "timestamp":"2024-12-13 13:33:22.751868413 +0000 UTC"}, Hostname:"ci-4186.0.0-a-6a956dd616", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.768 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.768 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.768 [INFO][5410] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-6a956dd616' Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.776 [INFO][5410] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.786 [INFO][5410] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.803 [INFO][5410] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.813 [INFO][5410] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.816 [INFO][5410] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.816 [INFO][5410] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.818 [INFO][5410] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.827 [INFO][5410] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.844 [INFO][5410] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.9.3/26] block=192.168.9.0/26 handle="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.844 [INFO][5410] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.3/26] handle="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.844 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:33:22.903688 containerd[1709]: 2024-12-13 13:33:22.844 [INFO][5410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.3/26] IPv6=[] ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" HandleID="k8s-pod-network.2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Workload="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.904654 containerd[1709]: 2024-12-13 13:33:22.848 [INFO][5343] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d48882ed-a3fb-4cc6-a051-3acab30e260b", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"", Pod:"coredns-7db6d8ff4d-nxst4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0e292e2ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.904654 containerd[1709]: 2024-12-13 13:33:22.850 [INFO][5343] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.3/32] ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.904654 containerd[1709]: 2024-12-13 13:33:22.851 [INFO][5343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c0e292e2ee ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.904654 containerd[1709]: 2024-12-13 13:33:22.865 [INFO][5343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.904654 containerd[1709]: 2024-12-13 13:33:22.865 [INFO][5343] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d48882ed-a3fb-4cc6-a051-3acab30e260b", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc", Pod:"coredns-7db6d8ff4d-nxst4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0e292e2ee", MAC:"ba:f5:92:4b:73:99", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.904654 containerd[1709]: 2024-12-13 13:33:22.892 [INFO][5343] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxst4" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--nxst4-eth0" Dec 13 13:33:22.924416 containerd[1709]: time="2024-12-13T13:33:22.924353448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7zsr,Uid:5e38b74a-209a-4cd3-be7c-117000f59938,Namespace:calico-system,Attempt:5,} returns sandbox id \"f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d\"" Dec 13 13:33:22.926695 containerd[1709]: time="2024-12-13T13:33:22.926657708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:33:22.934709 systemd-networkd[1499]: cali0b0c3e7c4d6: Link UP Dec 13 13:33:22.936623 systemd-networkd[1499]: cali0b0c3e7c4d6: Gained carrier Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.629 [INFO][5375] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.666 [INFO][5375] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0 calico-kube-controllers-5cbfd9d889- calico-system 0c845263-e633-4055-81f9-4aa28ad32b74 759 0 2024-12-13 13:32:32 
+0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cbfd9d889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.0.0-a-6a956dd616 calico-kube-controllers-5cbfd9d889-rxm9h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b0c3e7c4d6 [] []}} ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.666 [INFO][5375] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.821 [INFO][5431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" HandleID="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.850 [INFO][5431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" HandleID="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334ee0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.0.0-a-6a956dd616", "pod":"calico-kube-controllers-5cbfd9d889-rxm9h", "timestamp":"2024-12-13 13:33:22.821152737 +0000 UTC"}, Hostname:"ci-4186.0.0-a-6a956dd616", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.850 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.850 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.850 [INFO][5431] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-6a956dd616' Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.855 [INFO][5431] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.860 [INFO][5431] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.868 [INFO][5431] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.870 [INFO][5431] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.879 [INFO][5431] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.879 [INFO][5431] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 
handle="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.883 [INFO][5431] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9 Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.894 [INFO][5431] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.915 [INFO][5431] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.4/26] block=192.168.9.0/26 handle="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.916 [INFO][5431] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.4/26] handle="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.917 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:33:22.964439 containerd[1709]: 2024-12-13 13:33:22.917 [INFO][5431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.4/26] IPv6=[] ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" HandleID="k8s-pod-network.c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 13:33:22.965430 containerd[1709]: 2024-12-13 13:33:22.925 [INFO][5375] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0", GenerateName:"calico-kube-controllers-5cbfd9d889-", Namespace:"calico-system", SelfLink:"", UID:"0c845263-e633-4055-81f9-4aa28ad32b74", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbfd9d889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"", Pod:"calico-kube-controllers-5cbfd9d889-rxm9h", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0c3e7c4d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.965430 containerd[1709]: 2024-12-13 13:33:22.928 [INFO][5375] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.4/32] ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 13:33:22.965430 containerd[1709]: 2024-12-13 13:33:22.928 [INFO][5375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b0c3e7c4d6 ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 13:33:22.965430 containerd[1709]: 2024-12-13 13:33:22.934 [INFO][5375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 13:33:22.965430 containerd[1709]: 2024-12-13 13:33:22.934 [INFO][5375] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0", GenerateName:"calico-kube-controllers-5cbfd9d889-", Namespace:"calico-system", SelfLink:"", UID:"0c845263-e633-4055-81f9-4aa28ad32b74", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbfd9d889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9", Pod:"calico-kube-controllers-5cbfd9d889-rxm9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0c3e7c4d6", MAC:"2e:7c:5d:4d:33:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:22.965430 containerd[1709]: 2024-12-13 13:33:22.954 [INFO][5375] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9" Namespace="calico-system" Pod="calico-kube-controllers-5cbfd9d889-rxm9h" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--kube--controllers--5cbfd9d889--rxm9h-eth0" Dec 13 
13:33:22.984126 systemd[1]: Started cri-containerd-0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57.scope - libcontainer container 0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57. Dec 13 13:33:23.004467 containerd[1709]: time="2024-12-13T13:33:22.994531191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:23.004467 containerd[1709]: time="2024-12-13T13:33:22.994594093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:23.004467 containerd[1709]: time="2024-12-13T13:33:22.994613194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.004467 containerd[1709]: time="2024-12-13T13:33:22.994698996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.023458 systemd-networkd[1499]: cali60779944b48: Link UP Dec 13 13:33:23.031999 systemd-networkd[1499]: cali60779944b48: Gained carrier Dec 13 13:33:23.055975 systemd[1]: Started cri-containerd-2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc.scope - libcontainer container 2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc. 
Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.616 [INFO][5366] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.655 [INFO][5366] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0 coredns-7db6d8ff4d- kube-system 3098ed4c-c400-4c97-958d-d1930afff8ed 762 0 2024-12-13 13:32:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.0.0-a-6a956dd616 coredns-7db6d8ff4d-88gf8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60779944b48 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.655 [INFO][5366] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.819 [INFO][5430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" HandleID="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Workload="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.853 [INFO][5430] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" HandleID="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Workload="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000121740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.0.0-a-6a956dd616", "pod":"coredns-7db6d8ff4d-88gf8", "timestamp":"2024-12-13 13:33:22.819502993 +0000 UTC"}, Hostname:"ci-4186.0.0-a-6a956dd616", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.853 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.916 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.916 [INFO][5430] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-6a956dd616' Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.921 [INFO][5430] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.932 [INFO][5430] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.954 [INFO][5430] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.958 [INFO][5430] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.961 [INFO][5430] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.962 [INFO][5430] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.964 [INFO][5430] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.971 [INFO][5430] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.987 [INFO][5430] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.9.5/26] block=192.168.9.0/26 handle="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.987 [INFO][5430] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.5/26] handle="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.987 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:33:23.067019 containerd[1709]: 2024-12-13 13:33:22.987 [INFO][5430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.5/26] IPv6=[] ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" HandleID="k8s-pod-network.df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Workload="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.069134 containerd[1709]: 2024-12-13 13:33:22.996 [INFO][5366] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3098ed4c-c400-4c97-958d-d1930afff8ed", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"", Pod:"coredns-7db6d8ff4d-88gf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60779944b48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:23.069134 containerd[1709]: 2024-12-13 13:33:22.997 [INFO][5366] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.5/32] ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.069134 containerd[1709]: 2024-12-13 13:33:22.997 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60779944b48 ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.069134 containerd[1709]: 2024-12-13 13:33:23.033 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.069134 containerd[1709]: 2024-12-13 13:33:23.038 [INFO][5366] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3098ed4c-c400-4c97-958d-d1930afff8ed", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b", Pod:"coredns-7db6d8ff4d-88gf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60779944b48", MAC:"d2:d1:f4:13:04:f0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:23.069134 containerd[1709]: 2024-12-13 13:33:23.061 [INFO][5366] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-88gf8" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-coredns--7db6d8ff4d--88gf8-eth0" Dec 13 13:33:23.080997 systemd-networkd[1499]: cali1e0d3805531: Link UP Dec 13 13:33:23.081255 systemd-networkd[1499]: cali1e0d3805531: Gained carrier Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.617 [INFO][5361] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.668 [INFO][5361] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0 calico-apiserver-c5f78578d- calico-apiserver 4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef 761 0 2024-12-13 13:32:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5f78578d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.0.0-a-6a956dd616 calico-apiserver-c5f78578d-x98kx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e0d3805531 [] []}} 
ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.668 [INFO][5361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.839 [INFO][5438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" HandleID="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.859 [INFO][5438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" HandleID="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.0.0-a-6a956dd616", "pod":"calico-apiserver-c5f78578d-x98kx", "timestamp":"2024-12-13 13:33:22.839076407 +0000 UTC"}, Hostname:"ci-4186.0.0-a-6a956dd616", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.859 [INFO][5438] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.987 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.988 [INFO][5438] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-6a956dd616' Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.990 [INFO][5438] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:22.995 [INFO][5438] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.010 [INFO][5438] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.013 [INFO][5438] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.016 [INFO][5438] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.016 [INFO][5438] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.018 [INFO][5438] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.035 [INFO][5438] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 
handle="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.066 [INFO][5438] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.6/26] block=192.168.9.0/26 handle="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.066 [INFO][5438] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.6/26] handle="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" host="ci-4186.0.0-a-6a956dd616" Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.066 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:33:23.114831 containerd[1709]: 2024-12-13 13:33:23.066 [INFO][5438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.6/26] IPv6=[] ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" HandleID="k8s-pod-network.824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Workload="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.115730 containerd[1709]: 2024-12-13 13:33:23.071 [INFO][5361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0", GenerateName:"calico-apiserver-c5f78578d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef", ResourceVersion:"761", Generation:0, 
CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5f78578d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"", Pod:"calico-apiserver-c5f78578d-x98kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e0d3805531", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:23.115730 containerd[1709]: 2024-12-13 13:33:23.071 [INFO][5361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.6/32] ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.115730 containerd[1709]: 2024-12-13 13:33:23.071 [INFO][5361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e0d3805531 ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.115730 containerd[1709]: 2024-12-13 13:33:23.082 [INFO][5361] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.115730 containerd[1709]: 2024-12-13 13:33:23.082 [INFO][5361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0", GenerateName:"calico-apiserver-c5f78578d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5f78578d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-6a956dd616", ContainerID:"824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c", Pod:"calico-apiserver-c5f78578d-x98kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e0d3805531", MAC:"4a:fd:4b:88:f4:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:33:23.115730 containerd[1709]: 2024-12-13 13:33:23.107 [INFO][5361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c" Namespace="calico-apiserver" Pod="calico-apiserver-c5f78578d-x98kx" WorkloadEndpoint="ci--4186.0.0--a--6a956dd616-k8s-calico--apiserver--c5f78578d--x98kx-eth0" Dec 13 13:33:23.122799 containerd[1709]: time="2024-12-13T13:33:23.121199719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:23.122799 containerd[1709]: time="2024-12-13T13:33:23.121254621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:23.122799 containerd[1709]: time="2024-12-13T13:33:23.121359723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.122799 containerd[1709]: time="2024-12-13T13:33:23.121677032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.138387 containerd[1709]: time="2024-12-13T13:33:23.138218966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:23.138387 containerd[1709]: time="2024-12-13T13:33:23.138281868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:23.138387 containerd[1709]: time="2024-12-13T13:33:23.138301168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.139265 containerd[1709]: time="2024-12-13T13:33:23.138732280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.178011 systemd[1]: Started cri-containerd-df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b.scope - libcontainer container df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b. Dec 13 13:33:23.183873 containerd[1709]: time="2024-12-13T13:33:23.183823364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-dhdkn,Uid:e90719d9-2bf9-4651-a5b8-e332bf6846fe,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57\"" Dec 13 13:33:23.203949 systemd[1]: Started cri-containerd-c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9.scope - libcontainer container c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9. Dec 13 13:33:23.211863 containerd[1709]: time="2024-12-13T13:33:23.211631695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxst4,Uid:d48882ed-a3fb-4cc6-a051-3acab30e260b,Namespace:kube-system,Attempt:5,} returns sandbox id \"2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc\"" Dec 13 13:33:23.222265 containerd[1709]: time="2024-12-13T13:33:23.222182072Z" level=info msg="CreateContainer within sandbox \"2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:33:23.240917 containerd[1709]: time="2024-12-13T13:33:23.240841962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:23.241313 containerd[1709]: time="2024-12-13T13:33:23.240920964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:23.241313 containerd[1709]: time="2024-12-13T13:33:23.240945865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.243215 containerd[1709]: time="2024-12-13T13:33:23.241867489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:23.279173 systemd[1]: Started cri-containerd-824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c.scope - libcontainer container 824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c. Dec 13 13:33:23.312207 containerd[1709]: time="2024-12-13T13:33:23.312100634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-88gf8,Uid:3098ed4c-c400-4c97-958d-d1930afff8ed,Namespace:kube-system,Attempt:6,} returns sandbox id \"df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b\"" Dec 13 13:33:23.323769 containerd[1709]: time="2024-12-13T13:33:23.323379631Z" level=info msg="CreateContainer within sandbox \"df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:33:23.382998 containerd[1709]: time="2024-12-13T13:33:23.382952396Z" level=info msg="CreateContainer within sandbox \"2216e9a9e9a0201d6e0e519f4ed2a084e77e8c1c57336c9fd6469f7f8bc9c9dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c3680aab54bf7bfc2321ccd857865891408097fbdede68e0f3a8c8f0568575c\"" Dec 13 13:33:23.385000 containerd[1709]: time="2024-12-13T13:33:23.384970049Z" level=info msg="StartContainer for \"6c3680aab54bf7bfc2321ccd857865891408097fbdede68e0f3a8c8f0568575c\"" 
Dec 13 13:33:23.410721 containerd[1709]: time="2024-12-13T13:33:23.410681024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5f78578d-x98kx,Uid:4966e6b9-08e9-49c8-9ca0-aea5b2c4ccef,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c\"" Dec 13 13:33:23.424284 containerd[1709]: time="2024-12-13T13:33:23.424070076Z" level=info msg="CreateContainer within sandbox \"df2147eaad933eea9d0f1024eb20e4379b2d7759cd2dabfebebe58648e5bf41b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f4ff1d223e2ad777cb8c332393829debb7155f84810c1790dbcf1d1aa60ab85\"" Dec 13 13:33:23.426397 containerd[1709]: time="2024-12-13T13:33:23.425221306Z" level=info msg="StartContainer for \"1f4ff1d223e2ad777cb8c332393829debb7155f84810c1790dbcf1d1aa60ab85\"" Dec 13 13:33:23.442617 systemd[1]: Started cri-containerd-6c3680aab54bf7bfc2321ccd857865891408097fbdede68e0f3a8c8f0568575c.scope - libcontainer container 6c3680aab54bf7bfc2321ccd857865891408097fbdede68e0f3a8c8f0568575c. Dec 13 13:33:23.471826 kernel: bpftool[5825]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:33:23.492983 systemd[1]: Started cri-containerd-1f4ff1d223e2ad777cb8c332393829debb7155f84810c1790dbcf1d1aa60ab85.scope - libcontainer container 1f4ff1d223e2ad777cb8c332393829debb7155f84810c1790dbcf1d1aa60ab85. 
Dec 13 13:33:23.510260 containerd[1709]: time="2024-12-13T13:33:23.510224839Z" level=info msg="StartContainer for \"6c3680aab54bf7bfc2321ccd857865891408097fbdede68e0f3a8c8f0568575c\" returns successfully" Dec 13 13:33:23.551091 containerd[1709]: time="2024-12-13T13:33:23.551048512Z" level=info msg="StartContainer for \"1f4ff1d223e2ad777cb8c332393829debb7155f84810c1790dbcf1d1aa60ab85\" returns successfully" Dec 13 13:33:23.590870 containerd[1709]: time="2024-12-13T13:33:23.590169540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbfd9d889-rxm9h,Uid:0c845263-e633-4055-81f9-4aa28ad32b74,Namespace:calico-system,Attempt:6,} returns sandbox id \"c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9\"" Dec 13 13:33:24.028941 systemd-networkd[1499]: cali3c0e292e2ee: Gained IPv6LL Dec 13 13:33:24.183006 systemd-networkd[1499]: vxlan.calico: Link UP Dec 13 13:33:24.183017 systemd-networkd[1499]: vxlan.calico: Gained carrier Dec 13 13:33:24.404477 kubelet[3413]: I1213 13:33:24.403333 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-88gf8" podStartSLOduration=62.403311402 podStartE2EDuration="1m2.403311402s" podCreationTimestamp="2024-12-13 13:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:24.402961092 +0000 UTC m=+78.774098170" watchObservedRunningTime="2024-12-13 13:33:24.403311402 +0000 UTC m=+78.774448680" Dec 13 13:33:24.404477 kubelet[3413]: I1213 13:33:24.403477 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nxst4" podStartSLOduration=62.403469106 podStartE2EDuration="1m2.403469106s" podCreationTimestamp="2024-12-13 13:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:24.378307945 
+0000 UTC m=+78.749445023" watchObservedRunningTime="2024-12-13 13:33:24.403469106 +0000 UTC m=+78.774606184" Dec 13 13:33:24.479770 systemd-networkd[1499]: cali1c4477f8cb9: Gained IPv6LL Dec 13 13:33:24.541433 systemd-networkd[1499]: cali124de79cc83: Gained IPv6LL Dec 13 13:33:24.698226 containerd[1709]: time="2024-12-13T13:33:24.697822839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:24.700344 containerd[1709]: time="2024-12-13T13:33:24.700210602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:33:24.705260 containerd[1709]: time="2024-12-13T13:33:24.705192432Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:24.710444 containerd[1709]: time="2024-12-13T13:33:24.710392869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:24.711265 containerd[1709]: time="2024-12-13T13:33:24.711218291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.784395978s" Dec 13 13:33:24.711265 containerd[1709]: time="2024-12-13T13:33:24.711257192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:33:24.713220 containerd[1709]: time="2024-12-13T13:33:24.713016238Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:33:24.714228 containerd[1709]: time="2024-12-13T13:33:24.714197069Z" level=info msg="CreateContainer within sandbox \"f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:33:24.732881 systemd-networkd[1499]: cali60779944b48: Gained IPv6LL Dec 13 13:33:24.760506 containerd[1709]: time="2024-12-13T13:33:24.760472285Z" level=info msg="CreateContainer within sandbox \"f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b410e7d7f2ef7eb6568ed5c87c0e88ced87144a7d973e003046ef3f46d4e94f9\"" Dec 13 13:33:24.761108 containerd[1709]: time="2024-12-13T13:33:24.760967998Z" level=info msg="StartContainer for \"b410e7d7f2ef7eb6568ed5c87c0e88ced87144a7d973e003046ef3f46d4e94f9\"" Dec 13 13:33:24.798022 systemd[1]: Started cri-containerd-b410e7d7f2ef7eb6568ed5c87c0e88ced87144a7d973e003046ef3f46d4e94f9.scope - libcontainer container b410e7d7f2ef7eb6568ed5c87c0e88ced87144a7d973e003046ef3f46d4e94f9. 
Dec 13 13:33:24.827184 containerd[1709]: time="2024-12-13T13:33:24.827071034Z" level=info msg="StartContainer for \"b410e7d7f2ef7eb6568ed5c87c0e88ced87144a7d973e003046ef3f46d4e94f9\" returns successfully" Dec 13 13:33:24.861209 systemd-networkd[1499]: cali1e0d3805531: Gained IPv6LL Dec 13 13:33:24.989020 systemd-networkd[1499]: cali0b0c3e7c4d6: Gained IPv6LL Dec 13 13:33:25.629337 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Dec 13 13:33:27.031965 containerd[1709]: time="2024-12-13T13:33:27.031911258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:27.034546 containerd[1709]: time="2024-12-13T13:33:27.034472325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 13:33:27.037935 containerd[1709]: time="2024-12-13T13:33:27.037883815Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:27.043011 containerd[1709]: time="2024-12-13T13:33:27.042962848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:27.043848 containerd[1709]: time="2024-12-13T13:33:27.043676567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.330623128s" Dec 13 13:33:27.043848 containerd[1709]: time="2024-12-13T13:33:27.043714268Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:33:27.047793 containerd[1709]: time="2024-12-13T13:33:27.047554869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:33:27.051445 containerd[1709]: time="2024-12-13T13:33:27.051418071Z" level=info msg="CreateContainer within sandbox \"0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:33:27.100220 containerd[1709]: time="2024-12-13T13:33:27.100174151Z" level=info msg="CreateContainer within sandbox \"0cb6694f317f90ebaec302b3d33f8d167e6bbee7e9e0356fe4af7ca916d1bd57\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3caced7aad3e86cedaad80f5787467cf62b40ead6654eb2bae902ff2724739b0\"" Dec 13 13:33:27.100693 containerd[1709]: time="2024-12-13T13:33:27.100663864Z" level=info msg="StartContainer for \"3caced7aad3e86cedaad80f5787467cf62b40ead6654eb2bae902ff2724739b0\"" Dec 13 13:33:27.140148 systemd[1]: Started cri-containerd-3caced7aad3e86cedaad80f5787467cf62b40ead6654eb2bae902ff2724739b0.scope - libcontainer container 3caced7aad3e86cedaad80f5787467cf62b40ead6654eb2bae902ff2724739b0. 
Dec 13 13:33:27.188139 containerd[1709]: time="2024-12-13T13:33:27.188026159Z" level=info msg="StartContainer for \"3caced7aad3e86cedaad80f5787467cf62b40ead6654eb2bae902ff2724739b0\" returns successfully" Dec 13 13:33:27.408834 containerd[1709]: time="2024-12-13T13:33:27.408596654Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:27.410955 containerd[1709]: time="2024-12-13T13:33:27.410587006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 13:33:27.416239 containerd[1709]: time="2024-12-13T13:33:27.416156653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 368.561282ms" Dec 13 13:33:27.416239 containerd[1709]: time="2024-12-13T13:33:27.416206254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:33:27.425322 containerd[1709]: time="2024-12-13T13:33:27.424966084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 13:33:27.427427 containerd[1709]: time="2024-12-13T13:33:27.427282245Z" level=info msg="CreateContainer within sandbox \"824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:33:27.479500 containerd[1709]: time="2024-12-13T13:33:27.478875800Z" level=info msg="CreateContainer within sandbox \"824bb88973093641425fb6d340fb5fe956c34bd6860d21656fcefbcadfe7b54c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"744da153d85552df5b6531bf9e580e1ca10d955a5c704129aad6d719f0f795e4\"" Dec 13 13:33:27.481396 containerd[1709]: time="2024-12-13T13:33:27.481182961Z" level=info msg="StartContainer for \"744da153d85552df5b6531bf9e580e1ca10d955a5c704129aad6d719f0f795e4\"" Dec 13 13:33:27.512925 systemd[1]: Started cri-containerd-744da153d85552df5b6531bf9e580e1ca10d955a5c704129aad6d719f0f795e4.scope - libcontainer container 744da153d85552df5b6531bf9e580e1ca10d955a5c704129aad6d719f0f795e4. Dec 13 13:33:27.573645 containerd[1709]: time="2024-12-13T13:33:27.573580288Z" level=info msg="StartContainer for \"744da153d85552df5b6531bf9e580e1ca10d955a5c704129aad6d719f0f795e4\" returns successfully" Dec 13 13:33:28.413944 kubelet[3413]: I1213 13:33:28.413337 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5f78578d-dhdkn" podStartSLOduration=53.560552233 podStartE2EDuration="57.413314849s" podCreationTimestamp="2024-12-13 13:32:31 +0000 UTC" firstStartedPulling="2024-12-13 13:33:23.192895003 +0000 UTC m=+77.564032181" lastFinishedPulling="2024-12-13 13:33:27.045657719 +0000 UTC m=+81.416794797" observedRunningTime="2024-12-13 13:33:27.420591169 +0000 UTC m=+81.791728247" watchObservedRunningTime="2024-12-13 13:33:28.413314849 +0000 UTC m=+82.784451927" Dec 13 13:33:28.415916 kubelet[3413]: I1213 13:33:28.415348 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5f78578d-x98kx" podStartSLOduration=53.408030525 podStartE2EDuration="57.415335902s" podCreationTimestamp="2024-12-13 13:32:31 +0000 UTC" firstStartedPulling="2024-12-13 13:33:23.413810506 +0000 UTC m=+77.784947684" lastFinishedPulling="2024-12-13 13:33:27.421115883 +0000 UTC m=+81.792253061" observedRunningTime="2024-12-13 13:33:28.415227799 +0000 UTC m=+82.786364977" watchObservedRunningTime="2024-12-13 13:33:28.415335902 +0000 UTC m=+82.786473080" Dec 13 13:33:29.609300 containerd[1709]: 
time="2024-12-13T13:33:29.609230167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:29.613757 containerd[1709]: time="2024-12-13T13:33:29.613667684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 13:33:29.624394 containerd[1709]: time="2024-12-13T13:33:29.624280063Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:29.632103 containerd[1709]: time="2024-12-13T13:33:29.632033866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:29.632636 containerd[1709]: time="2024-12-13T13:33:29.632466878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.207464092s" Dec 13 13:33:29.632636 containerd[1709]: time="2024-12-13T13:33:29.632500179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 13:33:29.634313 containerd[1709]: time="2024-12-13T13:33:29.633892715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:33:29.652377 containerd[1709]: time="2024-12-13T13:33:29.652348500Z" level=info msg="CreateContainer within sandbox 
\"c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 13:33:29.688916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240666752.mount: Deactivated successfully. Dec 13 13:33:29.695988 containerd[1709]: time="2024-12-13T13:33:29.695524334Z" level=info msg="CreateContainer within sandbox \"c18ae31a2a3015a4372481875897d1840c557354f5e6da8b68f59b88bd27e7d9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c207754906dd75a9be2ffb8e86e3e4c5d8565324d92c0fdcecd32465891fe4a5\"" Dec 13 13:33:29.697159 containerd[1709]: time="2024-12-13T13:33:29.696961172Z" level=info msg="StartContainer for \"c207754906dd75a9be2ffb8e86e3e4c5d8565324d92c0fdcecd32465891fe4a5\"" Dec 13 13:33:29.741045 systemd[1]: Started cri-containerd-c207754906dd75a9be2ffb8e86e3e4c5d8565324d92c0fdcecd32465891fe4a5.scope - libcontainer container c207754906dd75a9be2ffb8e86e3e4c5d8565324d92c0fdcecd32465891fe4a5. 
Dec 13 13:33:29.794161 containerd[1709]: time="2024-12-13T13:33:29.794043522Z" level=info msg="StartContainer for \"c207754906dd75a9be2ffb8e86e3e4c5d8565324d92c0fdcecd32465891fe4a5\" returns successfully"
Dec 13 13:33:30.425603 kubelet[3413]: I1213 13:33:30.425512 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cbfd9d889-rxm9h" podStartSLOduration=52.384427905 podStartE2EDuration="58.425490811s" podCreationTimestamp="2024-12-13 13:32:32 +0000 UTC" firstStartedPulling="2024-12-13 13:33:23.592658205 +0000 UTC m=+77.963795283" lastFinishedPulling="2024-12-13 13:33:29.633721111 +0000 UTC m=+84.004858189" observedRunningTime="2024-12-13 13:33:30.422073022 +0000 UTC m=+84.793210200" watchObservedRunningTime="2024-12-13 13:33:30.425490811 +0000 UTC m=+84.796627889"
Dec 13 13:33:31.151349 containerd[1709]: time="2024-12-13T13:33:31.151194289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:31.153484 containerd[1709]: time="2024-12-13T13:33:31.153423048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 13:33:31.156329 containerd[1709]: time="2024-12-13T13:33:31.156234122Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:31.168411 containerd[1709]: time="2024-12-13T13:33:31.168048533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:31.170346 containerd[1709]: time="2024-12-13T13:33:31.170221790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.536293274s"
Dec 13 13:33:31.170346 containerd[1709]: time="2024-12-13T13:33:31.170294292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 13:33:31.176417 containerd[1709]: time="2024-12-13T13:33:31.176377752Z" level=info msg="CreateContainer within sandbox \"f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 13:33:31.347198 containerd[1709]: time="2024-12-13T13:33:31.347137345Z" level=info msg="CreateContainer within sandbox \"f34233503d18588cd481c3a2362867e34b2ab2ff9cd615e4ae128b473949c99d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c84a065d7d100262d3f92bbccd880e4563ed75892e77e69ab3158f972daa9d18\""
Dec 13 13:33:31.347820 containerd[1709]: time="2024-12-13T13:33:31.347778262Z" level=info msg="StartContainer for \"c84a065d7d100262d3f92bbccd880e4563ed75892e77e69ab3158f972daa9d18\""
Dec 13 13:33:31.388928 systemd[1]: Started cri-containerd-c84a065d7d100262d3f92bbccd880e4563ed75892e77e69ab3158f972daa9d18.scope - libcontainer container c84a065d7d100262d3f92bbccd880e4563ed75892e77e69ab3158f972daa9d18.
Dec 13 13:33:31.423736 containerd[1709]: time="2024-12-13T13:33:31.423620257Z" level=info msg="StartContainer for \"c84a065d7d100262d3f92bbccd880e4563ed75892e77e69ab3158f972daa9d18\" returns successfully"
Dec 13 13:33:31.821426 kubelet[3413]: I1213 13:33:31.821380 3413 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 13:33:31.821426 kubelet[3413]: I1213 13:33:31.821417 3413 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 13:33:32.436777 kubelet[3413]: I1213 13:33:32.435245 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l7zsr" podStartSLOduration=52.188018196 podStartE2EDuration="1m0.435221474s" podCreationTimestamp="2024-12-13 13:32:32 +0000 UTC" firstStartedPulling="2024-12-13 13:33:22.926076593 +0000 UTC m=+77.297213771" lastFinishedPulling="2024-12-13 13:33:31.173279971 +0000 UTC m=+85.544417049" observedRunningTime="2024-12-13 13:33:32.434830363 +0000 UTC m=+86.805967541" watchObservedRunningTime="2024-12-13 13:33:32.435221474 +0000 UTC m=+86.806358652"
Dec 13 13:34:14.727267 systemd[1]: run-containerd-runc-k8s.io-843f11f7b4c9bcd646bbf654fb595beef818af7ae9c2848e50d7d8ee5fc8c06f-runc.lhkRUv.mount: Deactivated successfully.
Dec 13 13:34:17.345583 containerd[1709]: time="2024-12-13T13:34:17.345426365Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\""
Dec 13 13:34:17.345583 containerd[1709]: time="2024-12-13T13:34:17.345585569Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully"
Dec 13 13:34:17.346521 containerd[1709]: time="2024-12-13T13:34:17.345603670Z" level=info msg="StopPodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully"
Dec 13 13:34:17.346521 containerd[1709]: time="2024-12-13T13:34:17.345946779Z" level=info msg="RemovePodSandbox for \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\""
Dec 13 13:34:17.346521 containerd[1709]: time="2024-12-13T13:34:17.345984480Z" level=info msg="Forcibly stopping sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\""
Dec 13 13:34:17.346521 containerd[1709]: time="2024-12-13T13:34:17.346082482Z" level=info msg="TearDown network for sandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" successfully"
Dec 13 13:34:17.358606 containerd[1709]: time="2024-12-13T13:34:17.357991395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.358606 containerd[1709]: time="2024-12-13T13:34:17.358264902Z" level=info msg="RemovePodSandbox \"704f60761319920415ef7839bff831f9b3b4a059a6b4e91608feee38cd93ea7f\" returns successfully"
Dec 13 13:34:17.359164 containerd[1709]: time="2024-12-13T13:34:17.359128725Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\""
Dec 13 13:34:17.359269 containerd[1709]: time="2024-12-13T13:34:17.359224427Z" level=info msg="TearDown network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" successfully"
Dec 13 13:34:17.359311 containerd[1709]: time="2024-12-13T13:34:17.359267529Z" level=info msg="StopPodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" returns successfully"
Dec 13 13:34:17.359629 containerd[1709]: time="2024-12-13T13:34:17.359531135Z" level=info msg="RemovePodSandbox for \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\""
Dec 13 13:34:17.359629 containerd[1709]: time="2024-12-13T13:34:17.359561336Z" level=info msg="Forcibly stopping sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\""
Dec 13 13:34:17.359882 containerd[1709]: time="2024-12-13T13:34:17.359661039Z" level=info msg="TearDown network for sandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" successfully"
Dec 13 13:34:17.372781 containerd[1709]: time="2024-12-13T13:34:17.372622879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.372910 containerd[1709]: time="2024-12-13T13:34:17.372808884Z" level=info msg="RemovePodSandbox \"3c271ddd843a8733aaa85d7220f846415b6297df5eecf5171d1f5ecae68a2560\" returns successfully"
Dec 13 13:34:17.373194 containerd[1709]: time="2024-12-13T13:34:17.373093892Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\""
Dec 13 13:34:17.373194 containerd[1709]: time="2024-12-13T13:34:17.373191094Z" level=info msg="TearDown network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" successfully"
Dec 13 13:34:17.373194 containerd[1709]: time="2024-12-13T13:34:17.373207995Z" level=info msg="StopPodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" returns successfully"
Dec 13 13:34:17.373534 containerd[1709]: time="2024-12-13T13:34:17.373495702Z" level=info msg="RemovePodSandbox for \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\""
Dec 13 13:34:17.373534 containerd[1709]: time="2024-12-13T13:34:17.373524403Z" level=info msg="Forcibly stopping sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\""
Dec 13 13:34:17.373665 containerd[1709]: time="2024-12-13T13:34:17.373606605Z" level=info msg="TearDown network for sandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" successfully"
Dec 13 13:34:17.382592 containerd[1709]: time="2024-12-13T13:34:17.382388936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.382592 containerd[1709]: time="2024-12-13T13:34:17.382539040Z" level=info msg="RemovePodSandbox \"342dd7b03388e0d012a00600b5cc7c16bc0230f19e9343cbd40a74c3c9442a41\" returns successfully"
Dec 13 13:34:17.382995 containerd[1709]: time="2024-12-13T13:34:17.382942850Z" level=info msg="StopPodSandbox for \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\""
Dec 13 13:34:17.383080 containerd[1709]: time="2024-12-13T13:34:17.383034753Z" level=info msg="TearDown network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" successfully"
Dec 13 13:34:17.383080 containerd[1709]: time="2024-12-13T13:34:17.383050353Z" level=info msg="StopPodSandbox for \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" returns successfully"
Dec 13 13:34:17.383467 containerd[1709]: time="2024-12-13T13:34:17.383336361Z" level=info msg="RemovePodSandbox for \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\""
Dec 13 13:34:17.383467 containerd[1709]: time="2024-12-13T13:34:17.383361361Z" level=info msg="Forcibly stopping sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\""
Dec 13 13:34:17.383467 containerd[1709]: time="2024-12-13T13:34:17.383432963Z" level=info msg="TearDown network for sandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" successfully"
Dec 13 13:34:17.391412 containerd[1709]: time="2024-12-13T13:34:17.391268469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.391709 containerd[1709]: time="2024-12-13T13:34:17.391447974Z" level=info msg="RemovePodSandbox \"3c3a846eabfd72932183004cbf7396154aab7eaf2a31f08fdd56c21c6cf36267\" returns successfully"
Dec 13 13:34:17.392373 containerd[1709]: time="2024-12-13T13:34:17.392308596Z" level=info msg="StopPodSandbox for \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\""
Dec 13 13:34:17.392467 containerd[1709]: time="2024-12-13T13:34:17.392403899Z" level=info msg="TearDown network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\" successfully"
Dec 13 13:34:17.392467 containerd[1709]: time="2024-12-13T13:34:17.392419499Z" level=info msg="StopPodSandbox for \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\" returns successfully"
Dec 13 13:34:17.392832 containerd[1709]: time="2024-12-13T13:34:17.392712607Z" level=info msg="RemovePodSandbox for \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\""
Dec 13 13:34:17.392832 containerd[1709]: time="2024-12-13T13:34:17.392737208Z" level=info msg="Forcibly stopping sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\""
Dec 13 13:34:17.392962 containerd[1709]: time="2024-12-13T13:34:17.392844711Z" level=info msg="TearDown network for sandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\" successfully"
Dec 13 13:34:17.402799 containerd[1709]: time="2024-12-13T13:34:17.402766771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.402885 containerd[1709]: time="2024-12-13T13:34:17.402812072Z" level=info msg="RemovePodSandbox \"84951abfc4cbe4cdd6984abfabfcf1ff2baac47169cc0028add8cdc28ccbe7c6\" returns successfully"
Dec 13 13:34:17.403197 containerd[1709]: time="2024-12-13T13:34:17.403157381Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\""
Dec 13 13:34:17.403306 containerd[1709]: time="2024-12-13T13:34:17.403250084Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully"
Dec 13 13:34:17.403306 containerd[1709]: time="2024-12-13T13:34:17.403264684Z" level=info msg="StopPodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully"
Dec 13 13:34:17.403637 containerd[1709]: time="2024-12-13T13:34:17.403602993Z" level=info msg="RemovePodSandbox for \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\""
Dec 13 13:34:17.403637 containerd[1709]: time="2024-12-13T13:34:17.403631394Z" level=info msg="Forcibly stopping sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\""
Dec 13 13:34:17.403769 containerd[1709]: time="2024-12-13T13:34:17.403711396Z" level=info msg="TearDown network for sandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" successfully"
Dec 13 13:34:17.410868 containerd[1709]: time="2024-12-13T13:34:17.410737281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.410947 containerd[1709]: time="2024-12-13T13:34:17.410887885Z" level=info msg="RemovePodSandbox \"54b312fc9ea25501431b5915fbbf37f0f00daa1273ab614c6a0fa7ab20678e30\" returns successfully"
Dec 13 13:34:17.411258 containerd[1709]: time="2024-12-13T13:34:17.411230694Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\""
Dec 13 13:34:17.411361 containerd[1709]: time="2024-12-13T13:34:17.411341196Z" level=info msg="TearDown network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" successfully"
Dec 13 13:34:17.411414 containerd[1709]: time="2024-12-13T13:34:17.411359297Z" level=info msg="StopPodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" returns successfully"
Dec 13 13:34:17.411728 containerd[1709]: time="2024-12-13T13:34:17.411646504Z" level=info msg="RemovePodSandbox for \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\""
Dec 13 13:34:17.411728 containerd[1709]: time="2024-12-13T13:34:17.411674805Z" level=info msg="Forcibly stopping sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\""
Dec 13 13:34:17.411888 containerd[1709]: time="2024-12-13T13:34:17.411764408Z" level=info msg="TearDown network for sandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" successfully"
Dec 13 13:34:17.447344 containerd[1709]: time="2024-12-13T13:34:17.447316441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.447431 containerd[1709]: time="2024-12-13T13:34:17.447362743Z" level=info msg="RemovePodSandbox \"9a58c99c21e5816afadb4c83f1b888827e5712ec38030fec9f24c5e6e005d3e7\" returns successfully"
Dec 13 13:34:17.447662 containerd[1709]: time="2024-12-13T13:34:17.447629550Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\""
Dec 13 13:34:17.447739 containerd[1709]: time="2024-12-13T13:34:17.447722252Z" level=info msg="TearDown network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" successfully"
Dec 13 13:34:17.447808 containerd[1709]: time="2024-12-13T13:34:17.447736052Z" level=info msg="StopPodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" returns successfully"
Dec 13 13:34:17.448047 containerd[1709]: time="2024-12-13T13:34:17.448024560Z" level=info msg="RemovePodSandbox for \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\""
Dec 13 13:34:17.448155 containerd[1709]: time="2024-12-13T13:34:17.448132463Z" level=info msg="Forcibly stopping sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\""
Dec 13 13:34:17.448250 containerd[1709]: time="2024-12-13T13:34:17.448211765Z" level=info msg="TearDown network for sandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" successfully"
Dec 13 13:34:17.457446 containerd[1709]: time="2024-12-13T13:34:17.457315604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.457801 containerd[1709]: time="2024-12-13T13:34:17.457454908Z" level=info msg="RemovePodSandbox \"196a9918fd8a456ad08be866ebdfde331d9ff4ab4265cf2ee04e76934e892869\" returns successfully"
Dec 13 13:34:17.458116 containerd[1709]: time="2024-12-13T13:34:17.458030323Z" level=info msg="StopPodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\""
Dec 13 13:34:17.458189 containerd[1709]: time="2024-12-13T13:34:17.458126825Z" level=info msg="TearDown network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" successfully"
Dec 13 13:34:17.458189 containerd[1709]: time="2024-12-13T13:34:17.458141226Z" level=info msg="StopPodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" returns successfully"
Dec 13 13:34:17.458434 containerd[1709]: time="2024-12-13T13:34:17.458411533Z" level=info msg="RemovePodSandbox for \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\""
Dec 13 13:34:17.458503 containerd[1709]: time="2024-12-13T13:34:17.458439334Z" level=info msg="Forcibly stopping sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\""
Dec 13 13:34:17.458556 containerd[1709]: time="2024-12-13T13:34:17.458506335Z" level=info msg="TearDown network for sandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" successfully"
Dec 13 13:34:17.466313 containerd[1709]: time="2024-12-13T13:34:17.466273639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.466395 containerd[1709]: time="2024-12-13T13:34:17.466333441Z" level=info msg="RemovePodSandbox \"eb5f89087b2211df22d16b4b974229ed850c857511d99558c5d39503dd61034d\" returns successfully"
Dec 13 13:34:17.466712 containerd[1709]: time="2024-12-13T13:34:17.466624649Z" level=info msg="StopPodSandbox for \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\""
Dec 13 13:34:17.466812 containerd[1709]: time="2024-12-13T13:34:17.466718251Z" level=info msg="TearDown network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\" successfully"
Dec 13 13:34:17.466812 containerd[1709]: time="2024-12-13T13:34:17.466733952Z" level=info msg="StopPodSandbox for \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\" returns successfully"
Dec 13 13:34:17.467117 containerd[1709]: time="2024-12-13T13:34:17.467015059Z" level=info msg="RemovePodSandbox for \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\""
Dec 13 13:34:17.467117 containerd[1709]: time="2024-12-13T13:34:17.467044160Z" level=info msg="Forcibly stopping sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\""
Dec 13 13:34:17.467240 containerd[1709]: time="2024-12-13T13:34:17.467142062Z" level=info msg="TearDown network for sandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\" successfully"
Dec 13 13:34:17.475203 containerd[1709]: time="2024-12-13T13:34:17.475170873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.475281 containerd[1709]: time="2024-12-13T13:34:17.475218874Z" level=info msg="RemovePodSandbox \"837a2f417584cd8e9486506f5513ce7d7c5c6deecba95564b25a913716e5f716\" returns successfully"
Dec 13 13:34:17.475579 containerd[1709]: time="2024-12-13T13:34:17.475537583Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\""
Dec 13 13:34:17.475664 containerd[1709]: time="2024-12-13T13:34:17.475626785Z" level=info msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully"
Dec 13 13:34:17.475664 containerd[1709]: time="2024-12-13T13:34:17.475643286Z" level=info msg="StopPodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully"
Dec 13 13:34:17.476022 containerd[1709]: time="2024-12-13T13:34:17.475924093Z" level=info msg="RemovePodSandbox for \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\""
Dec 13 13:34:17.476022 containerd[1709]: time="2024-12-13T13:34:17.475950594Z" level=info msg="Forcibly stopping sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\""
Dec 13 13:34:17.476168 containerd[1709]: time="2024-12-13T13:34:17.476030196Z" level=info msg="TearDown network for sandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" successfully"
Dec 13 13:34:17.484273 containerd[1709]: time="2024-12-13T13:34:17.484172210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.484902 containerd[1709]: time="2024-12-13T13:34:17.484300413Z" level=info msg="RemovePodSandbox \"93c32877bbb29906e71e1131333be9477472a9b2ab6e42b9525c2ee446401204\" returns successfully"
Dec 13 13:34:17.485049 containerd[1709]: time="2024-12-13T13:34:17.485028932Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\""
Dec 13 13:34:17.485198 containerd[1709]: time="2024-12-13T13:34:17.485155135Z" level=info msg="TearDown network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" successfully"
Dec 13 13:34:17.485198 containerd[1709]: time="2024-12-13T13:34:17.485186536Z" level=info msg="StopPodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" returns successfully"
Dec 13 13:34:17.485488 containerd[1709]: time="2024-12-13T13:34:17.485452843Z" level=info msg="RemovePodSandbox for \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\""
Dec 13 13:34:17.485488 containerd[1709]: time="2024-12-13T13:34:17.485481044Z" level=info msg="Forcibly stopping sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\""
Dec 13 13:34:17.485659 containerd[1709]: time="2024-12-13T13:34:17.485550646Z" level=info msg="TearDown network for sandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" successfully"
Dec 13 13:34:17.494907 containerd[1709]: time="2024-12-13T13:34:17.494875391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.494993 containerd[1709]: time="2024-12-13T13:34:17.494927692Z" level=info msg="RemovePodSandbox \"0caaba6b9e21afd1ab3d7d61cc1a7e6ab2ecaf97d8934be5fa1ddd0f234a5880\" returns successfully"
Dec 13 13:34:17.495259 containerd[1709]: time="2024-12-13T13:34:17.495205099Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\""
Dec 13 13:34:17.495379 containerd[1709]: time="2024-12-13T13:34:17.495322002Z" level=info msg="TearDown network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" successfully"
Dec 13 13:34:17.495379 containerd[1709]: time="2024-12-13T13:34:17.495340503Z" level=info msg="StopPodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" returns successfully"
Dec 13 13:34:17.495623 containerd[1709]: time="2024-12-13T13:34:17.495602110Z" level=info msg="RemovePodSandbox for \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\""
Dec 13 13:34:17.495691 containerd[1709]: time="2024-12-13T13:34:17.495628911Z" level=info msg="Forcibly stopping sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\""
Dec 13 13:34:17.495768 containerd[1709]: time="2024-12-13T13:34:17.495729513Z" level=info msg="TearDown network for sandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" successfully"
Dec 13 13:34:17.505621 containerd[1709]: time="2024-12-13T13:34:17.505589772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.505702 containerd[1709]: time="2024-12-13T13:34:17.505641474Z" level=info msg="RemovePodSandbox \"de13cff2245dc4ed18a8c3f6aefb9f47ba419bb6e2c94459fc2257768b9e93cb\" returns successfully"
Dec 13 13:34:17.506042 containerd[1709]: time="2024-12-13T13:34:17.505994283Z" level=info msg="StopPodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\""
Dec 13 13:34:17.506114 containerd[1709]: time="2024-12-13T13:34:17.506089685Z" level=info msg="TearDown network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" successfully"
Dec 13 13:34:17.506114 containerd[1709]: time="2024-12-13T13:34:17.506104886Z" level=info msg="StopPodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" returns successfully"
Dec 13 13:34:17.506451 containerd[1709]: time="2024-12-13T13:34:17.506354292Z" level=info msg="RemovePodSandbox for \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\""
Dec 13 13:34:17.506451 containerd[1709]: time="2024-12-13T13:34:17.506382293Z" level=info msg="Forcibly stopping sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\""
Dec 13 13:34:17.506576 containerd[1709]: time="2024-12-13T13:34:17.506459595Z" level=info msg="TearDown network for sandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" successfully"
Dec 13 13:34:17.514876 containerd[1709]: time="2024-12-13T13:34:17.514850315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.514964 containerd[1709]: time="2024-12-13T13:34:17.514894217Z" level=info msg="RemovePodSandbox \"05e34531ee02562434315e5486898af33952f14e9aad78adf66cb7fc1e3fc784\" returns successfully"
Dec 13 13:34:17.515267 containerd[1709]: time="2024-12-13T13:34:17.515231726Z" level=info msg="StopPodSandbox for \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\""
Dec 13 13:34:17.515335 containerd[1709]: time="2024-12-13T13:34:17.515322428Z" level=info msg="TearDown network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\" successfully"
Dec 13 13:34:17.515381 containerd[1709]: time="2024-12-13T13:34:17.515337128Z" level=info msg="StopPodSandbox for \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\" returns successfully"
Dec 13 13:34:17.515700 containerd[1709]: time="2024-12-13T13:34:17.515672037Z" level=info msg="RemovePodSandbox for \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\""
Dec 13 13:34:17.515787 containerd[1709]: time="2024-12-13T13:34:17.515702838Z" level=info msg="Forcibly stopping sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\""
Dec 13 13:34:17.515841 containerd[1709]: time="2024-12-13T13:34:17.515802440Z" level=info msg="TearDown network for sandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\" successfully"
Dec 13 13:34:17.526803 containerd[1709]: time="2024-12-13T13:34:17.526770729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.526890 containerd[1709]: time="2024-12-13T13:34:17.526816630Z" level=info msg="RemovePodSandbox \"1ddc4d8100ab2bdb82d2f5ac56fc17925f43d5f77f4b02f3d9de4fdb4aad0482\" returns successfully"
Dec 13 13:34:17.527207 containerd[1709]: time="2024-12-13T13:34:17.527117138Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\""
Dec 13 13:34:17.527321 containerd[1709]: time="2024-12-13T13:34:17.527221340Z" level=info msg="TearDown network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" successfully"
Dec 13 13:34:17.527321 containerd[1709]: time="2024-12-13T13:34:17.527236641Z" level=info msg="StopPodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" returns successfully"
Dec 13 13:34:17.527642 containerd[1709]: time="2024-12-13T13:34:17.527596850Z" level=info msg="RemovePodSandbox for \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\""
Dec 13 13:34:17.527642 containerd[1709]: time="2024-12-13T13:34:17.527628351Z" level=info msg="Forcibly stopping sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\""
Dec 13 13:34:17.527793 containerd[1709]: time="2024-12-13T13:34:17.527708753Z" level=info msg="TearDown network for sandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" successfully"
Dec 13 13:34:17.536662 containerd[1709]: time="2024-12-13T13:34:17.536634988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:17.536759 containerd[1709]: time="2024-12-13T13:34:17.536682389Z" level=info msg="RemovePodSandbox \"97104bf9c307ad7f13a63a015e8e624b4ff70ad421114bf1a3c93bc55db9e874\" returns successfully" Dec 13 13:34:17.537033 containerd[1709]: time="2024-12-13T13:34:17.537011398Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" Dec 13 13:34:17.537189 containerd[1709]: time="2024-12-13T13:34:17.537164102Z" level=info msg="TearDown network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" successfully" Dec 13 13:34:17.537189 containerd[1709]: time="2024-12-13T13:34:17.537182002Z" level=info msg="StopPodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" returns successfully" Dec 13 13:34:17.537538 containerd[1709]: time="2024-12-13T13:34:17.537495310Z" level=info msg="RemovePodSandbox for \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" Dec 13 13:34:17.537538 containerd[1709]: time="2024-12-13T13:34:17.537525711Z" level=info msg="Forcibly stopping sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\"" Dec 13 13:34:17.537664 containerd[1709]: time="2024-12-13T13:34:17.537598213Z" level=info msg="TearDown network for sandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" successfully" Dec 13 13:34:17.546942 containerd[1709]: time="2024-12-13T13:34:17.546789854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.546942 containerd[1709]: time="2024-12-13T13:34:17.546868357Z" level=info msg="RemovePodSandbox \"a931b2d75904a11f9be88ab0a046dc459cdcf2143c5085ead224e44a863501e0\" returns successfully" Dec 13 13:34:17.547265 containerd[1709]: time="2024-12-13T13:34:17.547240566Z" level=info msg="StopPodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\"" Dec 13 13:34:17.547486 containerd[1709]: time="2024-12-13T13:34:17.547332469Z" level=info msg="TearDown network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" successfully" Dec 13 13:34:17.547486 containerd[1709]: time="2024-12-13T13:34:17.547348669Z" level=info msg="StopPodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" returns successfully" Dec 13 13:34:17.547767 containerd[1709]: time="2024-12-13T13:34:17.547731579Z" level=info msg="RemovePodSandbox for \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\"" Dec 13 13:34:17.547898 containerd[1709]: time="2024-12-13T13:34:17.547875483Z" level=info msg="Forcibly stopping sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\"" Dec 13 13:34:17.548016 containerd[1709]: time="2024-12-13T13:34:17.547977386Z" level=info msg="TearDown network for sandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" successfully" Dec 13 13:34:17.575439 containerd[1709]: time="2024-12-13T13:34:17.574993195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.575439 containerd[1709]: time="2024-12-13T13:34:17.575058597Z" level=info msg="RemovePodSandbox \"a68eaa09bed8efdfca6f6157424c2428010a477d8ffb0eace0fdccc76c3c8e3a\" returns successfully" Dec 13 13:34:17.575439 containerd[1709]: time="2024-12-13T13:34:17.575398506Z" level=info msg="StopPodSandbox for \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\"" Dec 13 13:34:17.575632 containerd[1709]: time="2024-12-13T13:34:17.575491508Z" level=info msg="TearDown network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\" successfully" Dec 13 13:34:17.575632 containerd[1709]: time="2024-12-13T13:34:17.575506009Z" level=info msg="StopPodSandbox for \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\" returns successfully" Dec 13 13:34:17.575901 containerd[1709]: time="2024-12-13T13:34:17.575871118Z" level=info msg="RemovePodSandbox for \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\"" Dec 13 13:34:17.575977 containerd[1709]: time="2024-12-13T13:34:17.575901719Z" level=info msg="Forcibly stopping sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\"" Dec 13 13:34:17.576525 containerd[1709]: time="2024-12-13T13:34:17.575993722Z" level=info msg="TearDown network for sandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\" successfully" Dec 13 13:34:17.584100 containerd[1709]: time="2024-12-13T13:34:17.584075534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.584230 containerd[1709]: time="2024-12-13T13:34:17.584210337Z" level=info msg="RemovePodSandbox \"79afadad72dced73fe280c9705ede4a852fd434d620b260f3225ccde84e8c424\" returns successfully" Dec 13 13:34:17.587070 containerd[1709]: time="2024-12-13T13:34:17.587044512Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:34:17.587158 containerd[1709]: time="2024-12-13T13:34:17.587140414Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully" Dec 13 13:34:17.587224 containerd[1709]: time="2024-12-13T13:34:17.587159815Z" level=info msg="StopPodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully" Dec 13 13:34:17.588868 containerd[1709]: time="2024-12-13T13:34:17.587455823Z" level=info msg="RemovePodSandbox for \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:34:17.588868 containerd[1709]: time="2024-12-13T13:34:17.587480123Z" level=info msg="Forcibly stopping sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\"" Dec 13 13:34:17.588868 containerd[1709]: time="2024-12-13T13:34:17.587532625Z" level=info msg="TearDown network for sandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" successfully" Dec 13 13:34:17.599347 containerd[1709]: time="2024-12-13T13:34:17.599204031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.599347 containerd[1709]: time="2024-12-13T13:34:17.599252933Z" level=info msg="RemovePodSandbox \"a7e2929faa106b547ee1fee8027c3d62789eed9cab4628115ad748233a311d6d\" returns successfully" Dec 13 13:34:17.600026 containerd[1709]: time="2024-12-13T13:34:17.599860349Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" Dec 13 13:34:17.600026 containerd[1709]: time="2024-12-13T13:34:17.599956751Z" level=info msg="TearDown network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" successfully" Dec 13 13:34:17.600026 containerd[1709]: time="2024-12-13T13:34:17.599970952Z" level=info msg="StopPodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" returns successfully" Dec 13 13:34:17.600738 containerd[1709]: time="2024-12-13T13:34:17.600553567Z" level=info msg="RemovePodSandbox for \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" Dec 13 13:34:17.600738 containerd[1709]: time="2024-12-13T13:34:17.600588168Z" level=info msg="Forcibly stopping sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\"" Dec 13 13:34:17.600738 containerd[1709]: time="2024-12-13T13:34:17.600660370Z" level=info msg="TearDown network for sandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" successfully" Dec 13 13:34:17.611293 containerd[1709]: time="2024-12-13T13:34:17.611266248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.611372 containerd[1709]: time="2024-12-13T13:34:17.611323450Z" level=info msg="RemovePodSandbox \"ba1103f40af78d6d15f4673903e9e1a8c1f4422c2a8decd82d4e340b7801de26\" returns successfully" Dec 13 13:34:17.611759 containerd[1709]: time="2024-12-13T13:34:17.611615057Z" level=info msg="StopPodSandbox for \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" Dec 13 13:34:17.611853 containerd[1709]: time="2024-12-13T13:34:17.611791362Z" level=info msg="TearDown network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" successfully" Dec 13 13:34:17.611853 containerd[1709]: time="2024-12-13T13:34:17.611825763Z" level=info msg="StopPodSandbox for \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" returns successfully" Dec 13 13:34:17.612139 containerd[1709]: time="2024-12-13T13:34:17.612085170Z" level=info msg="RemovePodSandbox for \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" Dec 13 13:34:17.612139 containerd[1709]: time="2024-12-13T13:34:17.612113170Z" level=info msg="Forcibly stopping sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\"" Dec 13 13:34:17.612248 containerd[1709]: time="2024-12-13T13:34:17.612189472Z" level=info msg="TearDown network for sandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" successfully" Dec 13 13:34:17.624859 containerd[1709]: time="2024-12-13T13:34:17.624832205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.624970 containerd[1709]: time="2024-12-13T13:34:17.624880306Z" level=info msg="RemovePodSandbox \"1229e06291e03b9fff5dbdc37a9003f6c3f71b000e4f51460c46248e82bab707\" returns successfully" Dec 13 13:34:17.625250 containerd[1709]: time="2024-12-13T13:34:17.625226315Z" level=info msg="StopPodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\"" Dec 13 13:34:17.625381 containerd[1709]: time="2024-12-13T13:34:17.625323317Z" level=info msg="TearDown network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" successfully" Dec 13 13:34:17.625381 containerd[1709]: time="2024-12-13T13:34:17.625343018Z" level=info msg="StopPodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" returns successfully" Dec 13 13:34:17.625717 containerd[1709]: time="2024-12-13T13:34:17.625615025Z" level=info msg="RemovePodSandbox for \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\"" Dec 13 13:34:17.625717 containerd[1709]: time="2024-12-13T13:34:17.625645326Z" level=info msg="Forcibly stopping sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\"" Dec 13 13:34:17.625951 containerd[1709]: time="2024-12-13T13:34:17.625723228Z" level=info msg="TearDown network for sandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" successfully" Dec 13 13:34:17.647588 containerd[1709]: time="2024-12-13T13:34:17.647543701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.647697 containerd[1709]: time="2024-12-13T13:34:17.647604303Z" level=info msg="RemovePodSandbox \"b4fa4166fc19ff13508cef89fc2c7441c7423198ceac24c7de4d881d946e7ac1\" returns successfully" Dec 13 13:34:17.648114 containerd[1709]: time="2024-12-13T13:34:17.648078615Z" level=info msg="StopPodSandbox for \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\"" Dec 13 13:34:17.648250 containerd[1709]: time="2024-12-13T13:34:17.648193618Z" level=info msg="TearDown network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\" successfully" Dec 13 13:34:17.648250 containerd[1709]: time="2024-12-13T13:34:17.648217119Z" level=info msg="StopPodSandbox for \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\" returns successfully" Dec 13 13:34:17.648604 containerd[1709]: time="2024-12-13T13:34:17.648575728Z" level=info msg="RemovePodSandbox for \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\"" Dec 13 13:34:17.648686 containerd[1709]: time="2024-12-13T13:34:17.648613829Z" level=info msg="Forcibly stopping sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\"" Dec 13 13:34:17.648846 containerd[1709]: time="2024-12-13T13:34:17.648707432Z" level=info msg="TearDown network for sandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\" successfully" Dec 13 13:34:17.694727 containerd[1709]: time="2024-12-13T13:34:17.694370331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.694727 containerd[1709]: time="2024-12-13T13:34:17.694557536Z" level=info msg="RemovePodSandbox \"5a5d07b3cce135bf70e5eeb5232605e4fd9f4b422d2822964313cae9dfea8cb0\" returns successfully" Dec 13 13:34:17.695187 containerd[1709]: time="2024-12-13T13:34:17.695159852Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" Dec 13 13:34:17.695367 containerd[1709]: time="2024-12-13T13:34:17.695342757Z" level=info msg="TearDown network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" successfully" Dec 13 13:34:17.695458 containerd[1709]: time="2024-12-13T13:34:17.695367957Z" level=info msg="StopPodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" returns successfully" Dec 13 13:34:17.695837 containerd[1709]: time="2024-12-13T13:34:17.695805369Z" level=info msg="RemovePodSandbox for \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" Dec 13 13:34:17.695944 containerd[1709]: time="2024-12-13T13:34:17.695836170Z" level=info msg="Forcibly stopping sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\"" Dec 13 13:34:17.696012 containerd[1709]: time="2024-12-13T13:34:17.695936472Z" level=info msg="TearDown network for sandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" successfully" Dec 13 13:34:17.722487 containerd[1709]: time="2024-12-13T13:34:17.722455769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.722583 containerd[1709]: time="2024-12-13T13:34:17.722509770Z" level=info msg="RemovePodSandbox \"07baee6b93d4ebddbb437e721964155527bf842caa5b61e8c5b8d83dd75239fb\" returns successfully" Dec 13 13:34:17.722993 containerd[1709]: time="2024-12-13T13:34:17.722900981Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" Dec 13 13:34:17.723081 containerd[1709]: time="2024-12-13T13:34:17.723003183Z" level=info msg="TearDown network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" successfully" Dec 13 13:34:17.723081 containerd[1709]: time="2024-12-13T13:34:17.723020684Z" level=info msg="StopPodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" returns successfully" Dec 13 13:34:17.723361 containerd[1709]: time="2024-12-13T13:34:17.723324392Z" level=info msg="RemovePodSandbox for \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" Dec 13 13:34:17.723361 containerd[1709]: time="2024-12-13T13:34:17.723353593Z" level=info msg="Forcibly stopping sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\"" Dec 13 13:34:17.723488 containerd[1709]: time="2024-12-13T13:34:17.723428895Z" level=info msg="TearDown network for sandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" successfully" Dec 13 13:34:17.731656 containerd[1709]: time="2024-12-13T13:34:17.731627810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.731761 containerd[1709]: time="2024-12-13T13:34:17.731679611Z" level=info msg="RemovePodSandbox \"2eecb83c5ad9afe1955da766d7ee59454b23acf7dec99b382c39b37b867da2e9\" returns successfully" Dec 13 13:34:17.732022 containerd[1709]: time="2024-12-13T13:34:17.731991420Z" level=info msg="StopPodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\"" Dec 13 13:34:17.732185 containerd[1709]: time="2024-12-13T13:34:17.732097822Z" level=info msg="TearDown network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" successfully" Dec 13 13:34:17.732185 containerd[1709]: time="2024-12-13T13:34:17.732115823Z" level=info msg="StopPodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" returns successfully" Dec 13 13:34:17.732924 containerd[1709]: time="2024-12-13T13:34:17.732738239Z" level=info msg="RemovePodSandbox for \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\"" Dec 13 13:34:17.733108 containerd[1709]: time="2024-12-13T13:34:17.732926944Z" level=info msg="Forcibly stopping sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\"" Dec 13 13:34:17.733108 containerd[1709]: time="2024-12-13T13:34:17.733012146Z" level=info msg="TearDown network for sandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" successfully" Dec 13 13:34:17.745526 containerd[1709]: time="2024-12-13T13:34:17.745495474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.745711 containerd[1709]: time="2024-12-13T13:34:17.745549076Z" level=info msg="RemovePodSandbox \"da1e69f52601258dcc46c0bd5c0c6406a9cf7411da2e2992a4c7d9f1a24435fe\" returns successfully" Dec 13 13:34:17.746337 containerd[1709]: time="2024-12-13T13:34:17.745874584Z" level=info msg="StopPodSandbox for \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\"" Dec 13 13:34:17.746337 containerd[1709]: time="2024-12-13T13:34:17.745964287Z" level=info msg="TearDown network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\" successfully" Dec 13 13:34:17.746337 containerd[1709]: time="2024-12-13T13:34:17.746019888Z" level=info msg="StopPodSandbox for \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\" returns successfully" Dec 13 13:34:17.746545 containerd[1709]: time="2024-12-13T13:34:17.746454399Z" level=info msg="RemovePodSandbox for \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\"" Dec 13 13:34:17.746545 containerd[1709]: time="2024-12-13T13:34:17.746478000Z" level=info msg="Forcibly stopping sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\"" Dec 13 13:34:17.746676 containerd[1709]: time="2024-12-13T13:34:17.746559602Z" level=info msg="TearDown network for sandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\" successfully" Dec 13 13:34:17.758769 containerd[1709]: time="2024-12-13T13:34:17.758722622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:17.758849 containerd[1709]: time="2024-12-13T13:34:17.758796424Z" level=info msg="RemovePodSandbox \"ea7ce7a3927f9ab640232557df55fbe327e40905eb08afd3b9626766466e8bcb\" returns successfully" Dec 13 13:34:45.267119 systemd[1]: Started sshd@7-10.200.8.33:22-10.200.16.10:46400.service - OpenSSH per-connection server daemon (10.200.16.10:46400). Dec 13 13:34:45.983268 sshd[6449]: Accepted publickey for core from 10.200.16.10 port 46400 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:34:45.985643 sshd-session[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:45.994830 systemd-logind[1691]: New session 10 of user core. Dec 13 13:34:45.999923 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:34:46.545139 sshd[6451]: Connection closed by 10.200.16.10 port 46400 Dec 13 13:34:46.545760 sshd-session[6449]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:46.549648 systemd[1]: sshd@7-10.200.8.33:22-10.200.16.10:46400.service: Deactivated successfully. Dec 13 13:34:46.551854 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:34:46.552659 systemd-logind[1691]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:34:46.553644 systemd-logind[1691]: Removed session 10. Dec 13 13:34:51.676044 systemd[1]: Started sshd@8-10.200.8.33:22-10.200.16.10:38348.service - OpenSSH per-connection server daemon (10.200.16.10:38348). Dec 13 13:34:52.384288 sshd[6464]: Accepted publickey for core from 10.200.16.10 port 38348 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:34:52.386121 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:52.395505 systemd-logind[1691]: New session 11 of user core. Dec 13 13:34:52.398928 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 13:34:52.937552 sshd[6469]: Connection closed by 10.200.16.10 port 38348 Dec 13 13:34:52.938597 sshd-session[6464]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:52.943097 systemd[1]: sshd@8-10.200.8.33:22-10.200.16.10:38348.service: Deactivated successfully. Dec 13 13:34:52.945368 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:34:52.946322 systemd-logind[1691]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:34:52.947721 systemd-logind[1691]: Removed session 11. Dec 13 13:34:58.068030 systemd[1]: Started sshd@9-10.200.8.33:22-10.200.16.10:38364.service - OpenSSH per-connection server daemon (10.200.16.10:38364). Dec 13 13:34:58.775565 sshd[6519]: Accepted publickey for core from 10.200.16.10 port 38364 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:34:58.777333 sshd-session[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:58.783676 systemd-logind[1691]: New session 12 of user core. Dec 13 13:34:58.788212 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:34:59.335124 sshd[6521]: Connection closed by 10.200.16.10 port 38364 Dec 13 13:34:59.336013 sshd-session[6519]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:59.338861 systemd[1]: sshd@9-10.200.8.33:22-10.200.16.10:38364.service: Deactivated successfully. Dec 13 13:34:59.341258 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:34:59.342886 systemd-logind[1691]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:34:59.344026 systemd-logind[1691]: Removed session 12. Dec 13 13:35:04.469048 systemd[1]: Started sshd@10-10.200.8.33:22-10.200.16.10:49196.service - OpenSSH per-connection server daemon (10.200.16.10:49196). 
Dec 13 13:35:05.177584 sshd[6552]: Accepted publickey for core from 10.200.16.10 port 49196 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:05.179232 sshd-session[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:05.183905 systemd-logind[1691]: New session 13 of user core. Dec 13 13:35:05.188164 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:35:05.733329 sshd[6554]: Connection closed by 10.200.16.10 port 49196 Dec 13 13:35:05.735549 sshd-session[6552]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:05.739154 systemd[1]: sshd@10-10.200.8.33:22-10.200.16.10:49196.service: Deactivated successfully. Dec 13 13:35:05.741658 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:35:05.743599 systemd-logind[1691]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:35:05.744796 systemd-logind[1691]: Removed session 13. Dec 13 13:35:10.864062 systemd[1]: Started sshd@11-10.200.8.33:22-10.200.16.10:45234.service - OpenSSH per-connection server daemon (10.200.16.10:45234). Dec 13 13:35:11.574401 sshd[6568]: Accepted publickey for core from 10.200.16.10 port 45234 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:11.576282 sshd-session[6568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:11.580801 systemd-logind[1691]: New session 14 of user core. Dec 13 13:35:11.586135 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:35:12.128257 sshd[6570]: Connection closed by 10.200.16.10 port 45234 Dec 13 13:35:12.129334 sshd-session[6568]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:12.134072 systemd[1]: sshd@11-10.200.8.33:22-10.200.16.10:45234.service: Deactivated successfully. Dec 13 13:35:12.136557 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:35:12.137481 systemd-logind[1691]: Session 14 logged out. 
Waiting for processes to exit. Dec 13 13:35:12.138486 systemd-logind[1691]: Removed session 14. Dec 13 13:35:12.257074 systemd[1]: Started sshd@12-10.200.8.33:22-10.200.16.10:45242.service - OpenSSH per-connection server daemon (10.200.16.10:45242). Dec 13 13:35:12.965981 sshd[6582]: Accepted publickey for core from 10.200.16.10 port 45242 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:12.967508 sshd-session[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:12.972518 systemd-logind[1691]: New session 15 of user core. Dec 13 13:35:12.977915 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:35:13.554602 sshd[6584]: Connection closed by 10.200.16.10 port 45242 Dec 13 13:35:13.555727 sshd-session[6582]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:13.560488 systemd[1]: sshd@12-10.200.8.33:22-10.200.16.10:45242.service: Deactivated successfully. Dec 13 13:35:13.562659 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:35:13.563464 systemd-logind[1691]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:35:13.564533 systemd-logind[1691]: Removed session 15. Dec 13 13:35:13.684029 systemd[1]: Started sshd@13-10.200.8.33:22-10.200.16.10:45248.service - OpenSSH per-connection server daemon (10.200.16.10:45248). Dec 13 13:35:14.393462 sshd[6593]: Accepted publickey for core from 10.200.16.10 port 45248 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:14.395274 sshd-session[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:14.399788 systemd-logind[1691]: New session 16 of user core. Dec 13 13:35:14.406939 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 13:35:14.947630 sshd[6595]: Connection closed by 10.200.16.10 port 45248 Dec 13 13:35:14.948474 sshd-session[6593]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:14.952975 systemd[1]: sshd@13-10.200.8.33:22-10.200.16.10:45248.service: Deactivated successfully. Dec 13 13:35:14.955348 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:35:14.956391 systemd-logind[1691]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:35:14.957365 systemd-logind[1691]: Removed session 16. Dec 13 13:35:20.080070 systemd[1]: Started sshd@14-10.200.8.33:22-10.200.16.10:42804.service - OpenSSH per-connection server daemon (10.200.16.10:42804). Dec 13 13:35:20.790670 sshd[6629]: Accepted publickey for core from 10.200.16.10 port 42804 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:20.792458 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:20.797136 systemd-logind[1691]: New session 17 of user core. Dec 13 13:35:20.801148 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:35:21.346161 sshd[6631]: Connection closed by 10.200.16.10 port 42804 Dec 13 13:35:21.347353 sshd-session[6629]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:21.352087 systemd[1]: sshd@14-10.200.8.33:22-10.200.16.10:42804.service: Deactivated successfully. Dec 13 13:35:21.354342 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:35:21.355091 systemd-logind[1691]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:35:21.355952 systemd-logind[1691]: Removed session 17. Dec 13 13:35:26.476079 systemd[1]: Started sshd@15-10.200.8.33:22-10.200.16.10:42806.service - OpenSSH per-connection server daemon (10.200.16.10:42806). 
Dec 13 13:35:27.186171 sshd[6665]: Accepted publickey for core from 10.200.16.10 port 42806 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:27.187997 sshd-session[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:27.192957 systemd-logind[1691]: New session 18 of user core. Dec 13 13:35:27.197904 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:35:27.742808 sshd[6667]: Connection closed by 10.200.16.10 port 42806 Dec 13 13:35:27.743024 sshd-session[6665]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:27.748019 systemd[1]: sshd@15-10.200.8.33:22-10.200.16.10:42806.service: Deactivated successfully. Dec 13 13:35:27.751634 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:35:27.753666 systemd-logind[1691]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:35:27.754638 systemd-logind[1691]: Removed session 18. Dec 13 13:35:32.872059 systemd[1]: Started sshd@16-10.200.8.33:22-10.200.16.10:54432.service - OpenSSH per-connection server daemon (10.200.16.10:54432). Dec 13 13:35:33.581211 sshd[6686]: Accepted publickey for core from 10.200.16.10 port 54432 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:33.583030 sshd-session[6686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:33.588896 systemd-logind[1691]: New session 19 of user core. Dec 13 13:35:33.594227 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:35:34.134557 sshd[6688]: Connection closed by 10.200.16.10 port 54432 Dec 13 13:35:34.135493 sshd-session[6686]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:34.139774 systemd[1]: sshd@16-10.200.8.33:22-10.200.16.10:54432.service: Deactivated successfully. Dec 13 13:35:34.141887 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:35:34.142601 systemd-logind[1691]: Session 19 logged out. 
Waiting for processes to exit. Dec 13 13:35:34.143679 systemd-logind[1691]: Removed session 19. Dec 13 13:35:39.269044 systemd[1]: Started sshd@17-10.200.8.33:22-10.200.16.10:47688.service - OpenSSH per-connection server daemon (10.200.16.10:47688). Dec 13 13:35:39.978406 sshd[6699]: Accepted publickey for core from 10.200.16.10 port 47688 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:39.980228 sshd-session[6699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:39.985845 systemd-logind[1691]: New session 20 of user core. Dec 13 13:35:39.990213 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 13:35:40.532765 sshd[6701]: Connection closed by 10.200.16.10 port 47688 Dec 13 13:35:40.533766 sshd-session[6699]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:40.538236 systemd[1]: sshd@17-10.200.8.33:22-10.200.16.10:47688.service: Deactivated successfully. Dec 13 13:35:40.540777 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:35:40.542108 systemd-logind[1691]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:35:40.543355 systemd-logind[1691]: Removed session 20. Dec 13 13:35:40.656965 systemd[1]: Started sshd@18-10.200.8.33:22-10.200.16.10:47704.service - OpenSSH per-connection server daemon (10.200.16.10:47704). Dec 13 13:35:41.375100 sshd[6712]: Accepted publickey for core from 10.200.16.10 port 47704 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:41.376644 sshd-session[6712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:41.381540 systemd-logind[1691]: New session 21 of user core. Dec 13 13:35:41.384901 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 13:35:41.977629 sshd[6714]: Connection closed by 10.200.16.10 port 47704
Dec 13 13:35:41.978969 sshd-session[6712]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:41.983535 systemd[1]: sshd@18-10.200.8.33:22-10.200.16.10:47704.service: Deactivated successfully.
Dec 13 13:35:41.985941 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:35:41.986611 systemd-logind[1691]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:35:41.987572 systemd-logind[1691]: Removed session 21.
Dec 13 13:35:42.102836 systemd[1]: Started sshd@19-10.200.8.33:22-10.200.16.10:47718.service - OpenSSH per-connection server daemon (10.200.16.10:47718).
Dec 13 13:35:42.814432 sshd[6722]: Accepted publickey for core from 10.200.16.10 port 47718 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:35:42.816033 sshd-session[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:42.820944 systemd-logind[1691]: New session 22 of user core.
Dec 13 13:35:42.826906 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:35:45.062622 sshd[6724]: Connection closed by 10.200.16.10 port 47718
Dec 13 13:35:45.063647 sshd-session[6722]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:45.066864 systemd[1]: sshd@19-10.200.8.33:22-10.200.16.10:47718.service: Deactivated successfully.
Dec 13 13:35:45.069119 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:35:45.070897 systemd-logind[1691]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:35:45.071930 systemd-logind[1691]: Removed session 22.
Dec 13 13:35:45.192087 systemd[1]: Started sshd@20-10.200.8.33:22-10.200.16.10:47722.service - OpenSSH per-connection server daemon (10.200.16.10:47722).
Dec 13 13:35:45.900813 sshd[6761]: Accepted publickey for core from 10.200.16.10 port 47722 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:35:45.903135 sshd-session[6761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:45.910413 systemd-logind[1691]: New session 23 of user core.
Dec 13 13:35:45.916086 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:35:46.666291 sshd[6763]: Connection closed by 10.200.16.10 port 47722
Dec 13 13:35:46.667114 sshd-session[6761]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:46.670633 systemd[1]: sshd@20-10.200.8.33:22-10.200.16.10:47722.service: Deactivated successfully.
Dec 13 13:35:46.673309 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:35:46.674953 systemd-logind[1691]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:35:46.676205 systemd-logind[1691]: Removed session 23.
Dec 13 13:35:46.798009 systemd[1]: Started sshd@21-10.200.8.33:22-10.200.16.10:47738.service - OpenSSH per-connection server daemon (10.200.16.10:47738).
Dec 13 13:35:47.508014 sshd[6773]: Accepted publickey for core from 10.200.16.10 port 47738 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:35:47.509671 sshd-session[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:47.515515 systemd-logind[1691]: New session 24 of user core.
Dec 13 13:35:47.520186 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 13:35:48.064971 sshd[6775]: Connection closed by 10.200.16.10 port 47738
Dec 13 13:35:48.065867 sshd-session[6773]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:48.069089 systemd[1]: sshd@21-10.200.8.33:22-10.200.16.10:47738.service: Deactivated successfully.
Dec 13 13:35:48.071385 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 13:35:48.073304 systemd-logind[1691]: Session 24 logged out. Waiting for processes to exit.
Dec 13 13:35:48.074417 systemd-logind[1691]: Removed session 24.
Dec 13 13:35:53.197104 systemd[1]: Started sshd@22-10.200.8.33:22-10.200.16.10:52838.service - OpenSSH per-connection server daemon (10.200.16.10:52838).
Dec 13 13:35:53.553215 systemd[1]: run-containerd-runc-k8s.io-c207754906dd75a9be2ffb8e86e3e4c5d8565324d92c0fdcecd32465891fe4a5-runc.SO5WKC.mount: Deactivated successfully.
Dec 13 13:35:53.908012 sshd[6787]: Accepted publickey for core from 10.200.16.10 port 52838 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:35:53.909790 sshd-session[6787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:35:53.914935 systemd-logind[1691]: New session 25 of user core.
Dec 13 13:35:53.920141 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 13:35:54.459540 sshd[6809]: Connection closed by 10.200.16.10 port 52838
Dec 13 13:35:54.460466 sshd-session[6787]: pam_unix(sshd:session): session closed for user core
Dec 13 13:35:54.463409 systemd[1]: sshd@22-10.200.8.33:22-10.200.16.10:52838.service: Deactivated successfully.
Dec 13 13:35:54.465843 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 13:35:54.467488 systemd-logind[1691]: Session 25 logged out. Waiting for processes to exit.
Dec 13 13:35:54.468851 systemd-logind[1691]: Removed session 25.
Dec 13 13:35:59.585892 systemd[1]: Started sshd@23-10.200.8.33:22-10.200.16.10:49624.service - OpenSSH per-connection server daemon (10.200.16.10:49624).
Dec 13 13:36:00.303576 sshd[6819]: Accepted publickey for core from 10.200.16.10 port 49624 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:00.305448 sshd-session[6819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:00.310935 systemd-logind[1691]: New session 26 of user core.
Dec 13 13:36:00.318901 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 13:36:00.861978 sshd[6821]: Connection closed by 10.200.16.10 port 49624
Dec 13 13:36:00.862843 sshd-session[6819]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:00.869433 systemd-logind[1691]: Session 26 logged out. Waiting for processes to exit.
Dec 13 13:36:00.870726 systemd[1]: sshd@23-10.200.8.33:22-10.200.16.10:49624.service: Deactivated successfully.
Dec 13 13:36:00.873053 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 13:36:00.875187 systemd-logind[1691]: Removed session 26.
Dec 13 13:36:05.988820 systemd[1]: Started sshd@24-10.200.8.33:22-10.200.16.10:49632.service - OpenSSH per-connection server daemon (10.200.16.10:49632).
Dec 13 13:36:06.706548 sshd[6860]: Accepted publickey for core from 10.200.16.10 port 49632 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:06.708052 sshd-session[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:06.712998 systemd-logind[1691]: New session 27 of user core.
Dec 13 13:36:06.719878 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 13:36:07.261110 sshd[6862]: Connection closed by 10.200.16.10 port 49632
Dec 13 13:36:07.262059 sshd-session[6860]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:07.265432 systemd[1]: sshd@24-10.200.8.33:22-10.200.16.10:49632.service: Deactivated successfully.
Dec 13 13:36:07.267786 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 13:36:07.269603 systemd-logind[1691]: Session 27 logged out. Waiting for processes to exit.
Dec 13 13:36:07.270589 systemd-logind[1691]: Removed session 27.
Dec 13 13:36:12.392050 systemd[1]: Started sshd@25-10.200.8.33:22-10.200.16.10:54232.service - OpenSSH per-connection server daemon (10.200.16.10:54232).
Dec 13 13:36:13.101116 sshd[6873]: Accepted publickey for core from 10.200.16.10 port 54232 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:13.102710 sshd-session[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:13.107557 systemd-logind[1691]: New session 28 of user core.
Dec 13 13:36:13.114905 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 13:36:13.660921 sshd[6875]: Connection closed by 10.200.16.10 port 54232
Dec 13 13:36:13.661857 sshd-session[6873]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:13.665293 systemd[1]: sshd@25-10.200.8.33:22-10.200.16.10:54232.service: Deactivated successfully.
Dec 13 13:36:13.667617 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 13:36:13.669119 systemd-logind[1691]: Session 28 logged out. Waiting for processes to exit.
Dec 13 13:36:13.670228 systemd-logind[1691]: Removed session 28.
Dec 13 13:36:18.791119 systemd[1]: Started sshd@26-10.200.8.33:22-10.200.16.10:56880.service - OpenSSH per-connection server daemon (10.200.16.10:56880).
Dec 13 13:36:19.502269 sshd[6907]: Accepted publickey for core from 10.200.16.10 port 56880 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:19.504016 sshd-session[6907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:19.508906 systemd-logind[1691]: New session 29 of user core.
Dec 13 13:36:19.516952 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 13:36:20.062553 sshd[6912]: Connection closed by 10.200.16.10 port 56880
Dec 13 13:36:20.063486 sshd-session[6907]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:20.068354 systemd[1]: sshd@26-10.200.8.33:22-10.200.16.10:56880.service: Deactivated successfully.
Dec 13 13:36:20.072976 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 13:36:20.074664 systemd-logind[1691]: Session 29 logged out. Waiting for processes to exit.
Dec 13 13:36:20.076223 systemd-logind[1691]: Removed session 29.
Dec 13 13:36:25.194061 systemd[1]: Started sshd@27-10.200.8.33:22-10.200.16.10:56894.service - OpenSSH per-connection server daemon (10.200.16.10:56894).
Dec 13 13:36:25.902621 sshd[6944]: Accepted publickey for core from 10.200.16.10 port 56894 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:25.904502 sshd-session[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:25.908826 systemd-logind[1691]: New session 30 of user core.
Dec 13 13:36:25.913900 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 13:36:26.463842 sshd[6946]: Connection closed by 10.200.16.10 port 56894
Dec 13 13:36:26.464961 sshd-session[6944]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:26.470052 systemd[1]: sshd@27-10.200.8.33:22-10.200.16.10:56894.service: Deactivated successfully.
Dec 13 13:36:26.473384 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 13:36:26.474217 systemd-logind[1691]: Session 30 logged out. Waiting for processes to exit.
Dec 13 13:36:26.475220 systemd-logind[1691]: Removed session 30.
Dec 13 13:36:31.595054 systemd[1]: Started sshd@28-10.200.8.33:22-10.200.16.10:37048.service - OpenSSH per-connection server daemon (10.200.16.10:37048).
Dec 13 13:36:32.304207 sshd[6962]: Accepted publickey for core from 10.200.16.10 port 37048 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:32.305823 sshd-session[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:32.310737 systemd-logind[1691]: New session 31 of user core.
Dec 13 13:36:32.315912 systemd[1]: Started session-31.scope - Session 31 of User core.
Dec 13 13:36:32.867560 sshd[6964]: Connection closed by 10.200.16.10 port 37048
Dec 13 13:36:32.868593 sshd-session[6962]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:32.873426 systemd-logind[1691]: Session 31 logged out. Waiting for processes to exit.
Dec 13 13:36:32.875086 systemd[1]: sshd@28-10.200.8.33:22-10.200.16.10:37048.service: Deactivated successfully.
Dec 13 13:36:32.877950 systemd[1]: session-31.scope: Deactivated successfully.
Dec 13 13:36:32.878879 systemd-logind[1691]: Removed session 31.
Dec 13 13:36:38.003077 systemd[1]: Started sshd@29-10.200.8.33:22-10.200.16.10:37060.service - OpenSSH per-connection server daemon (10.200.16.10:37060).
Dec 13 13:36:38.712396 sshd[6987]: Accepted publickey for core from 10.200.16.10 port 37060 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:38.714121 sshd-session[6987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:38.719216 systemd-logind[1691]: New session 32 of user core.
Dec 13 13:36:38.724929 systemd[1]: Started session-32.scope - Session 32 of User core.
Dec 13 13:36:39.271876 sshd[6989]: Connection closed by 10.200.16.10 port 37060
Dec 13 13:36:39.272817 sshd-session[6987]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:39.277675 systemd[1]: sshd@29-10.200.8.33:22-10.200.16.10:37060.service: Deactivated successfully.
Dec 13 13:36:39.280107 systemd[1]: session-32.scope: Deactivated successfully.
Dec 13 13:36:39.281141 systemd-logind[1691]: Session 32 logged out. Waiting for processes to exit.
Dec 13 13:36:39.282127 systemd-logind[1691]: Removed session 32.
Dec 13 13:36:44.404104 systemd[1]: Started sshd@30-10.200.8.33:22-10.200.16.10:52382.service - OpenSSH per-connection server daemon (10.200.16.10:52382).
Dec 13 13:36:45.114268 sshd[7000]: Accepted publickey for core from 10.200.16.10 port 52382 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:45.115937 sshd-session[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:45.120225 systemd-logind[1691]: New session 33 of user core.
Dec 13 13:36:45.128899 systemd[1]: Started session-33.scope - Session 33 of User core.
Dec 13 13:36:45.670852 sshd[7024]: Connection closed by 10.200.16.10 port 52382
Dec 13 13:36:45.673012 sshd-session[7000]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:45.677252 systemd[1]: sshd@30-10.200.8.33:22-10.200.16.10:52382.service: Deactivated successfully.
Dec 13 13:36:45.680232 systemd[1]: session-33.scope: Deactivated successfully.
Dec 13 13:36:45.681071 systemd-logind[1691]: Session 33 logged out. Waiting for processes to exit.
Dec 13 13:36:45.682170 systemd-logind[1691]: Removed session 33.
Dec 13 13:36:50.804341 systemd[1]: Started sshd@31-10.200.8.33:22-10.200.16.10:37642.service - OpenSSH per-connection server daemon (10.200.16.10:37642).
Dec 13 13:36:51.511508 sshd[7035]: Accepted publickey for core from 10.200.16.10 port 37642 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg
Dec 13 13:36:51.513115 sshd-session[7035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:36:51.518177 systemd-logind[1691]: New session 34 of user core.
Dec 13 13:36:51.525928 systemd[1]: Started session-34.scope - Session 34 of User core.
Dec 13 13:36:52.070481 sshd[7037]: Connection closed by 10.200.16.10 port 37642
Dec 13 13:36:52.071416 sshd-session[7035]: pam_unix(sshd:session): session closed for user core
Dec 13 13:36:52.075909 systemd[1]: sshd@31-10.200.8.33:22-10.200.16.10:37642.service: Deactivated successfully.
Dec 13 13:36:52.078316 systemd[1]: session-34.scope: Deactivated successfully.
Dec 13 13:36:52.079136 systemd-logind[1691]: Session 34 logged out. Waiting for processes to exit.
Dec 13 13:36:52.080313 systemd-logind[1691]: Removed session 34.