Jul 2 00:20:54.080279 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:20:54.080317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:20:54.080332 kernel: BIOS-provided physical RAM map:
Jul 2 00:20:54.080343 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 00:20:54.080354 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 2 00:20:54.080364 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 2 00:20:54.080378 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul 2 00:20:54.080393 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul 2 00:20:54.080404 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 2 00:20:54.080413 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 2 00:20:54.080422 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 2 00:20:54.080432 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 2 00:20:54.080442 kernel: printk: bootconsole [earlyser0] enabled
Jul 2 00:20:54.080453 kernel: NX (Execute Disable) protection: active
Jul 2 00:20:54.080469 kernel: APIC: Static calls initialized
Jul 2 00:20:54.080479 kernel: efi: EFI v2.7 by Microsoft
Jul 2 00:20:54.080493 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee75a98
Jul 2 00:20:54.080503 kernel: SMBIOS 3.1.0 present.
Jul 2 00:20:54.080515 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 2 00:20:54.080527 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 2 00:20:54.080539 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 2 00:20:54.080551 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jul 2 00:20:54.080563 kernel: Hyper-V: Nested features: 0x1e0101
Jul 2 00:20:54.080574 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 2 00:20:54.080590 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 2 00:20:54.080601 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 00:20:54.080614 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 00:20:54.080628 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 2 00:20:54.080641 kernel: tsc: Detected 2593.905 MHz processor
Jul 2 00:20:54.080655 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:20:54.080668 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:20:54.080680 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 2 00:20:54.080694 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 2 00:20:54.080712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:20:54.080725 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 2 00:20:54.080737 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 2 00:20:54.080751 kernel: Using GB pages for direct mapping
Jul 2 00:20:54.080764 kernel: Secure boot disabled
Jul 2 00:20:54.080777 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:20:54.080790 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 2 00:20:54.080810 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080827 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080841 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 00:20:54.080854 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 2 00:20:54.080868 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080883 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080896 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080914 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080928 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080942 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080956 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:20:54.080969 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 2 00:20:54.080983 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 2 00:20:54.080997 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 2 00:20:54.081011 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 2 00:20:54.081028 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 2 00:20:54.081042 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 2 00:20:54.081055 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 2 00:20:54.081069 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 2 00:20:54.081083 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 2 00:20:54.081096 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 2 00:20:54.081109 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:20:54.081123 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:20:54.081137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 2 00:20:54.081155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 2 00:20:54.081183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 2 00:20:54.081197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 2 00:20:54.081211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 2 00:20:54.081224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 2 00:20:54.081238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 2 00:20:54.081251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 2 00:20:54.081265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 2 00:20:54.081278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 2 00:20:54.081297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 2 00:20:54.081311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 2 00:20:54.081325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 2 00:20:54.081339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 2 00:20:54.081353 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 2 00:20:54.081366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 2 00:20:54.081380 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 2 00:20:54.081394 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 2 00:20:54.081408 kernel: Zone ranges:
Jul 2 00:20:54.081426 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:20:54.081439 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 00:20:54.081453 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 00:20:54.081467 kernel: Movable zone start for each node
Jul 2 00:20:54.081481 kernel: Early memory node ranges
Jul 2 00:20:54.081495 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 00:20:54.081509 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 2 00:20:54.081522 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 2 00:20:54.081536 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 00:20:54.081553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 2 00:20:54.081566 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:20:54.081580 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 00:20:54.081594 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 2 00:20:54.081608 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 2 00:20:54.081622 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 2 00:20:54.081636 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:20:54.081649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:20:54.081663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:20:54.081681 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 2 00:20:54.081695 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:20:54.081709 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 2 00:20:54.081722 kernel: Booting paravirtualized kernel on Hyper-V
Jul 2 00:20:54.081736 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:20:54.081751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:20:54.081763 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:20:54.081778 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:20:54.081791 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:20:54.081807 kernel: Hyper-V: PV spinlocks enabled
Jul 2 00:20:54.081821 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:20:54.081837 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:20:54.081851 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:20:54.081864 kernel: random: crng init done
Jul 2 00:20:54.081877 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 2 00:20:54.081891 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:20:54.081905 kernel: Fallback order for Node 0: 0
Jul 2 00:20:54.081922 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 2 00:20:54.081948 kernel: Policy zone: Normal
Jul 2 00:20:54.081961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:20:54.081977 kernel: software IO TLB: area num 2.
Jul 2 00:20:54.081991 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 316268K reserved, 0K cma-reserved)
Jul 2 00:20:54.082005 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:20:54.082017 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:20:54.082031 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:20:54.082044 kernel: Dynamic Preempt: voluntary
Jul 2 00:20:54.082057 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:20:54.082073 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:20:54.082090 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:20:54.082104 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:20:54.082120 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:20:54.082135 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:20:54.082150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:20:54.082183 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:20:54.082196 kernel: Using NULL legacy PIC
Jul 2 00:20:54.082229 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 2 00:20:54.082246 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:20:54.082258 kernel: Console: colour dummy device 80x25
Jul 2 00:20:54.082272 kernel: printk: console [tty1] enabled
Jul 2 00:20:54.082286 kernel: printk: console [ttyS0] enabled
Jul 2 00:20:54.082299 kernel: printk: bootconsole [earlyser0] disabled
Jul 2 00:20:54.082313 kernel: ACPI: Core revision 20230628
Jul 2 00:20:54.082324 kernel: Failed to register legacy timer interrupt
Jul 2 00:20:54.082341 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:20:54.082355 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 2 00:20:54.082369 kernel: Hyper-V: Using IPI hypercalls
Jul 2 00:20:54.082381 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 2 00:20:54.082393 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 2 00:20:54.082405 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 2 00:20:54.082419 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 2 00:20:54.082431 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 2 00:20:54.082442 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 2 00:20:54.082468 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Jul 2 00:20:54.082481 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 00:20:54.082494 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 00:20:54.082505 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:20:54.082518 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:20:54.082531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:20:54.082544 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:20:54.082556 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 00:20:54.082568 kernel: RETBleed: Vulnerable
Jul 2 00:20:54.082585 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:20:54.082597 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:20:54.082610 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:20:54.082624 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 00:20:54.082636 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:20:54.082648 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:20:54.082663 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:20:54.082677 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 00:20:54.082691 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 00:20:54.082705 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 00:20:54.082720 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:20:54.082737 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 2 00:20:54.082751 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 2 00:20:54.082766 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 2 00:20:54.082780 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 2 00:20:54.082794 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:20:54.082808 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:20:54.082822 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:20:54.082836 kernel: SELinux: Initializing.
Jul 2 00:20:54.082851 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:20:54.082865 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:20:54.082880 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 00:20:54.082894 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:20:54.082912 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:20:54.082926 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:20:54.082940 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 00:20:54.082955 kernel: signal: max sigframe size: 3632
Jul 2 00:20:54.082969 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:20:54.082985 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:20:54.082999 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:20:54.083014 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:20:54.083028 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:20:54.083045 kernel: .... node #0, CPUs: #1
Jul 2 00:20:54.083060 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 2 00:20:54.083076 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 00:20:54.083090 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:20:54.083105 kernel: smpboot: Max logical packages: 1
Jul 2 00:20:54.083120 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 2 00:20:54.083135 kernel: devtmpfs: initialized
Jul 2 00:20:54.083149 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:20:54.089440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 2 00:20:54.089469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:20:54.089484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:20:54.089497 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:20:54.089511 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:20:54.089526 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:20:54.089539 kernel: audit: type=2000 audit(1719879653.027:1): state=initialized audit_enabled=0 res=1
Jul 2 00:20:54.089554 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:20:54.089567 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:20:54.089590 kernel: cpuidle: using governor menu
Jul 2 00:20:54.089604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:20:54.089618 kernel: dca service started, version 1.12.1
Jul 2 00:20:54.089632 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 2 00:20:54.089647 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:20:54.089661 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:20:54.089675 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:20:54.089689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:20:54.089703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:20:54.089719 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:20:54.089733 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:20:54.089746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:20:54.089760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:20:54.089773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:20:54.089788 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:20:54.089802 kernel: ACPI: Interpreter enabled
Jul 2 00:20:54.089817 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:20:54.089831 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:20:54.089848 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:20:54.089863 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 2 00:20:54.089877 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 2 00:20:54.089890 kernel: iommu: Default domain type: Translated
Jul 2 00:20:54.089904 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:20:54.089919 kernel: efivars: Registered efivars operations
Jul 2 00:20:54.089932 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:20:54.089947 kernel: PCI: System does not support PCI
Jul 2 00:20:54.089961 kernel: vgaarb: loaded
Jul 2 00:20:54.089978 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 2 00:20:54.089991 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:20:54.090005 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:20:54.090018 kernel: pnp: PnP ACPI init
Jul 2 00:20:54.090032 kernel: pnp: PnP ACPI: found 3 devices
Jul 2 00:20:54.090046 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:20:54.090060 kernel: NET: Registered PF_INET protocol family
Jul 2 00:20:54.090076 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:20:54.090091 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 2 00:20:54.090107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:20:54.090120 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:20:54.090132 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 2 00:20:54.090147 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 2 00:20:54.090161 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 00:20:54.090190 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 00:20:54.090203 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:20:54.090217 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:20:54.090231 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:20:54.090249 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 00:20:54.090263 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Jul 2 00:20:54.090278 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:20:54.090291 kernel: Initialise system trusted keyrings
Jul 2 00:20:54.090305 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 2 00:20:54.090319 kernel: Key type asymmetric registered
Jul 2 00:20:54.090334 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:20:54.090347 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:20:54.090361 kernel: io scheduler mq-deadline registered
Jul 2 00:20:54.090377 kernel: io scheduler kyber registered
Jul 2 00:20:54.090390 kernel: io scheduler bfq registered
Jul 2 00:20:54.090404 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:20:54.090419 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:20:54.090433 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:20:54.090448 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 2 00:20:54.090463 kernel: i8042: PNP: No PS/2 controller found.
Jul 2 00:20:54.090667 kernel: rtc_cmos 00:02: registered as rtc0
Jul 2 00:20:54.090800 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T00:20:53 UTC (1719879653)
Jul 2 00:20:54.090917 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 2 00:20:54.090936 kernel: intel_pstate: CPU model not supported
Jul 2 00:20:54.090950 kernel: efifb: probing for efifb
Jul 2 00:20:54.090964 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 00:20:54.090979 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 00:20:54.090994 kernel: efifb: scrolling: redraw
Jul 2 00:20:54.091009 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 00:20:54.091028 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:20:54.091043 kernel: fb0: EFI VGA frame buffer device
Jul 2 00:20:54.091057 kernel: pstore: Using crash dump compression: deflate
Jul 2 00:20:54.091071 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 2 00:20:54.091086 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:20:54.091100 kernel: Segment Routing with IPv6
Jul 2 00:20:54.091116 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:20:54.091131 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:20:54.091146 kernel: Key type dns_resolver registered
Jul 2 00:20:54.091159 kernel: IPI shorthand broadcast: enabled
Jul 2 00:20:54.091244 kernel: sched_clock: Marking stable (782002300, 44248800)->(1017865100, -191614000)
Jul 2 00:20:54.091257 kernel: registered taskstats version 1
Jul 2 00:20:54.091269 kernel: Loading compiled-in X.509 certificates
Jul 2 00:20:54.091284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:20:54.091297 kernel: Key type .fscrypt registered
Jul 2 00:20:54.091309 kernel: Key type fscrypt-provisioning registered
Jul 2 00:20:54.091327 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:20:54.091340 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:20:54.091358 kernel: ima: No architecture policies found
Jul 2 00:20:54.091371 kernel: clk: Disabling unused clocks
Jul 2 00:20:54.091386 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:20:54.091400 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:20:54.091415 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:20:54.091429 kernel: Run /init as init process
Jul 2 00:20:54.091443 kernel: with arguments:
Jul 2 00:20:54.091457 kernel: /init
Jul 2 00:20:54.091471 kernel: with environment:
Jul 2 00:20:54.091489 kernel: HOME=/
Jul 2 00:20:54.091503 kernel: TERM=linux
Jul 2 00:20:54.091517 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:20:54.091535 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:20:54.091554 systemd[1]: Detected virtualization microsoft.
Jul 2 00:20:54.091568 systemd[1]: Detected architecture x86-64.
Jul 2 00:20:54.091583 systemd[1]: Running in initrd.
Jul 2 00:20:54.091598 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:20:54.091617 systemd[1]: Hostname set to .
Jul 2 00:20:54.091633 systemd[1]: Initializing machine ID from random generator.
Jul 2 00:20:54.091648 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:20:54.091663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:20:54.091677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:20:54.091693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:20:54.091708 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:20:54.091724 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:20:54.091742 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:20:54.091758 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:20:54.091774 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:20:54.091788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:20:54.091803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:20:54.091818 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:20:54.091832 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:20:54.091850 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:20:54.091864 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:20:54.091878 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:20:54.091894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:20:54.091908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:20:54.091923 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:20:54.091938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:20:54.091952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:20:54.091970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:20:54.091984 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:20:54.091998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:20:54.092013 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:20:54.092028 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:20:54.092042 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:20:54.092057 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:20:54.092071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:20:54.092086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:20:54.092128 systemd-journald[176]: Collecting audit messages is disabled.
Jul 2 00:20:54.092161 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:20:54.092188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:20:54.092203 systemd-journald[176]: Journal started
Jul 2 00:20:54.092241 systemd-journald[176]: Runtime Journal (/run/log/journal/b4c55f749fed48f582b8437582c5dda7) is 8.0M, max 158.8M, 150.8M free.
Jul 2 00:20:54.082500 systemd-modules-load[177]: Inserted module 'overlay'
Jul 2 00:20:54.106532 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:20:54.107217 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:20:54.111716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:20:54.134538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:20:54.138474 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:20:54.142346 kernel: Bridge firewalling registered
Jul 2 00:20:54.142541 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jul 2 00:20:54.146387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:20:54.153339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:20:54.153843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:20:54.159330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:20:54.160371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:20:54.168015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:20:54.187491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:20:54.198468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:20:54.203693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:20:54.206594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:20:54.216221 dracut-cmdline[205]: dracut-dracut-053 Jul 2 00:20:54.221003 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:20:54.238155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:20:54.252365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:20:54.298283 systemd-resolved[250]: Positive Trust Anchors: Jul 2 00:20:54.298300 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:20:54.298360 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:20:54.301797 systemd-resolved[250]: Defaulting to hostname 'linux'. Jul 2 00:20:54.334403 kernel: SCSI subsystem initialized Jul 2 00:20:54.303450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:20:54.308521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:20:54.347187 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 00:20:54.360190 kernel: iscsi: registered transport (tcp) Jul 2 00:20:54.386385 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:20:54.386485 kernel: QLogic iSCSI HBA Driver Jul 2 00:20:54.421773 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:20:54.432344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:20:54.463676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:20:54.463755 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:20:54.466875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:20:54.511193 kernel: raid6: avx512x4 gen() 18544 MB/s Jul 2 00:20:54.530201 kernel: raid6: avx512x2 gen() 18486 MB/s Jul 2 00:20:54.548186 kernel: raid6: avx512x1 gen() 18143 MB/s Jul 2 00:20:54.567191 kernel: raid6: avx2x4 gen() 18414 MB/s Jul 2 00:20:54.585182 kernel: raid6: avx2x2 gen() 18312 MB/s Jul 2 00:20:54.605138 kernel: raid6: avx2x1 gen() 14176 MB/s Jul 2 00:20:54.605206 kernel: raid6: using algorithm avx512x4 gen() 18544 MB/s Jul 2 00:20:54.626215 kernel: raid6: .... xor() 8091 MB/s, rmw enabled Jul 2 00:20:54.626249 kernel: raid6: using avx512x2 recovery algorithm Jul 2 00:20:54.653192 kernel: xor: automatically using best checksumming function avx Jul 2 00:20:54.823196 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:20:54.832918 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:20:54.844415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:20:54.855971 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jul 2 00:20:54.860261 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:20:54.874560 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 2 00:20:54.885603 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jul 2 00:20:54.910602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:20:54.922339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:20:54.961944 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:20:54.970425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:20:54.985844 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:20:54.993824 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:20:55.002281 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:20:55.002363 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:20:55.010444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:20:55.027909 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:20:55.064200 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:20:55.078118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:20:55.078354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:20:55.081621 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:55.093772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:20:55.094029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:55.097026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:55.113185 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 00:20:55.113230 kernel: AES CTR mode by8 optimization enabled Jul 2 00:20:55.116193 kernel: hv_vmbus: Vmbus version:5.2 Jul 2 00:20:55.118628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:55.127482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:20:55.131211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:55.140547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:55.159350 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 00:20:55.159392 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 00:20:55.173453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:55.180029 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 00:20:55.188349 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:55.200387 kernel: PTP clock support registered Jul 2 00:20:55.206469 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 00:20:55.207302 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 00:20:55.233760 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 00:20:55.233830 kernel: hv_vmbus: registering driver hv_utils Jul 2 00:20:55.236304 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 00:20:55.238375 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 00:20:55.974033 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 00:20:55.973407 systemd-resolved[250]: Clock change detected. Flushing caches. Jul 2 00:20:55.976665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 00:20:55.987167 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 00:20:55.987187 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 00:20:55.989541 kernel: scsi host1: storvsc_host_t Jul 2 00:20:55.989792 kernel: scsi host0: storvsc_host_t Jul 2 00:20:55.991941 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 00:20:55.999452 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 00:20:56.007450 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 00:20:56.014261 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 00:20:56.014316 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 00:20:56.025672 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 00:20:56.027209 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 00:20:56.027233 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 00:20:56.037886 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 00:20:56.055181 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 00:20:56.055386 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 00:20:56.055569 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 00:20:56.055731 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 00:20:56.055905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:20:56.055927 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 00:20:56.134708 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: VF slot 1 added Jul 2 00:20:56.143453 kernel: hv_vmbus: registering driver hv_pci Jul 2 00:20:56.147449 kernel: hv_pci 801b16fd-fa4c-4906-bc48-38472f96f11a: PCI VMBus probing: Using version 0x10004 Jul 2 00:20:56.192476 kernel: hv_pci 801b16fd-fa4c-4906-bc48-38472f96f11a: PCI host bridge to bus fa4c:00 Jul 2 00:20:56.192720 kernel: pci_bus fa4c:00: root bus resource [mem 
0xfe0000000-0xfe00fffff window] Jul 2 00:20:56.192935 kernel: pci_bus fa4c:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 00:20:56.193141 kernel: pci fa4c:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 2 00:20:56.193393 kernel: pci fa4c:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 00:20:56.193625 kernel: pci fa4c:00:02.0: enabling Extended Tags Jul 2 00:20:56.193858 kernel: pci fa4c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fa4c:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 2 00:20:56.194081 kernel: pci_bus fa4c:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 00:20:56.194277 kernel: pci fa4c:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 00:20:56.383564 kernel: mlx5_core fa4c:00:02.0: enabling device (0000 -> 0002) Jul 2 00:20:56.647934 kernel: mlx5_core fa4c:00:02.0: firmware version: 14.30.1284 Jul 2 00:20:56.648163 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (455) Jul 2 00:20:56.648186 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (445) Jul 2 00:20:56.648206 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: VF registering: eth1 Jul 2 00:20:56.648372 kernel: mlx5_core fa4c:00:02.0 eth1: joined to eth0 Jul 2 00:20:56.648578 kernel: mlx5_core fa4c:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 2 00:20:56.537077 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 2 00:20:56.598242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 00:20:56.615487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 00:20:56.663113 kernel: mlx5_core fa4c:00:02.0 enP64076s1: renamed from eth1 Jul 2 00:20:56.640111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jul 2 00:20:56.643057 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 00:20:56.657173 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:20:56.678457 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:20:57.692547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:20:57.694054 disk-uuid[600]: The operation has completed successfully. Jul 2 00:20:57.786776 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:20:57.786900 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:20:57.801622 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:20:57.804836 sh[713]: Success Jul 2 00:20:57.837458 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 00:20:58.039373 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:20:58.051695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:20:58.056500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:20:58.072467 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:20:58.072520 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:58.077679 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:20:58.080269 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:20:58.082839 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:20:58.459798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:20:58.465306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 2 00:20:58.480573 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:20:58.485984 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:20:58.498752 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:58.498811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:58.501283 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:20:58.537455 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:20:58.552792 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:58.552393 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:20:58.563544 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:20:58.577632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:20:58.583407 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:20:58.595619 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:20:58.614319 systemd-networkd[897]: lo: Link UP Jul 2 00:20:58.614329 systemd-networkd[897]: lo: Gained carrier Jul 2 00:20:58.616474 systemd-networkd[897]: Enumeration completed Jul 2 00:20:58.616706 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:20:58.618958 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:58.618963 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:20:58.623510 systemd[1]: Reached target network.target - Network. 
Jul 2 00:20:58.689456 kernel: mlx5_core fa4c:00:02.0 enP64076s1: Link up Jul 2 00:20:58.721466 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: Data path switched to VF: enP64076s1 Jul 2 00:20:58.722148 systemd-networkd[897]: enP64076s1: Link UP Jul 2 00:20:58.722266 systemd-networkd[897]: eth0: Link UP Jul 2 00:20:58.722495 systemd-networkd[897]: eth0: Gained carrier Jul 2 00:20:58.722506 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:58.728655 systemd-networkd[897]: enP64076s1: Gained carrier Jul 2 00:20:58.768496 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 00:20:59.648069 ignition[892]: Ignition 2.18.0 Jul 2 00:20:59.648082 ignition[892]: Stage: fetch-offline Jul 2 00:20:59.648138 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.648149 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.648332 ignition[892]: parsed url from cmdline: "" Jul 2 00:20:59.648338 ignition[892]: no config URL provided Jul 2 00:20:59.648345 ignition[892]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:20:59.648355 ignition[892]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:20:59.648362 ignition[892]: failed to fetch config: resource requires networking Jul 2 00:20:59.650001 ignition[892]: Ignition finished successfully Jul 2 00:20:59.668814 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:20:59.677744 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 00:20:59.692698 ignition[906]: Ignition 2.18.0 Jul 2 00:20:59.692708 ignition[906]: Stage: fetch Jul 2 00:20:59.692925 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.692938 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.693023 ignition[906]: parsed url from cmdline: "" Jul 2 00:20:59.693026 ignition[906]: no config URL provided Jul 2 00:20:59.693031 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:20:59.693037 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:20:59.693062 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 00:20:59.788284 ignition[906]: GET result: OK Jul 2 00:20:59.788470 ignition[906]: config has been read from IMDS userdata Jul 2 00:20:59.788503 ignition[906]: parsing config with SHA512: a8e1d8f286d7fed3b01d42a223e78403844768fd88be838066251974de92c5a5d3ccb96e913e96ad0d637ece3311caf1add6b7d628fa70eee99b3fd0dd31c4a0 Jul 2 00:20:59.793556 unknown[906]: fetched base config from "system" Jul 2 00:20:59.793571 unknown[906]: fetched base config from "system" Jul 2 00:20:59.793935 ignition[906]: fetch: fetch complete Jul 2 00:20:59.793579 unknown[906]: fetched user config from "azure" Jul 2 00:20:59.793940 ignition[906]: fetch: fetch passed Jul 2 00:20:59.795753 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 00:20:59.793984 ignition[906]: Ignition finished successfully Jul 2 00:20:59.812618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 00:20:59.828427 ignition[913]: Ignition 2.18.0 Jul 2 00:20:59.828453 ignition[913]: Stage: kargs Jul 2 00:20:59.828666 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.828676 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.829555 ignition[913]: kargs: kargs passed Jul 2 00:20:59.829598 ignition[913]: Ignition finished successfully Jul 2 00:20:59.840372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:20:59.847585 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:20:59.860645 systemd-networkd[897]: enP64076s1: Gained IPv6LL Jul 2 00:20:59.861101 ignition[920]: Ignition 2.18.0 Jul 2 00:20:59.861106 ignition[920]: Stage: disks Jul 2 00:20:59.863143 ignition[920]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.863159 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.871239 ignition[920]: disks: disks passed Jul 2 00:20:59.871292 ignition[920]: Ignition finished successfully Jul 2 00:20:59.872078 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:20:59.875817 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:20:59.879727 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:20:59.890733 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:20:59.895478 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:20:59.901458 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:20:59.909589 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:20:59.987498 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 00:20:59.994409 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:21:00.002913 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 00:21:00.114459 kernel: EXT4-fs (sda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:21:00.115137 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:21:00.117710 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:21:00.156596 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:00.162078 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:21:00.170509 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Jul 2 00:21:00.171709 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 00:21:00.184793 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:00.184841 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:00.184860 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:21:00.184974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:21:00.185089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:21:00.195756 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:21:00.199122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:21:00.200745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:21:00.217582 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 2 00:21:00.756687 systemd-networkd[897]: eth0: Gained IPv6LL Jul 2 00:21:00.841939 coreos-metadata[942]: Jul 02 00:21:00.841 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 00:21:00.847915 coreos-metadata[942]: Jul 02 00:21:00.847 INFO Fetch successful Jul 2 00:21:00.850510 coreos-metadata[942]: Jul 02 00:21:00.848 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 00:21:00.860285 coreos-metadata[942]: Jul 02 00:21:00.860 INFO Fetch successful Jul 2 00:21:00.875276 coreos-metadata[942]: Jul 02 00:21:00.875 INFO wrote hostname ci-3975.1.1-a-7b42818af6 to /sysroot/etc/hostname Jul 2 00:21:00.877066 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 00:21:01.116874 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:21:01.152277 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:21:01.157403 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:21:01.162186 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:21:01.991361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:21:02.000564 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:21:02.007627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:21:02.014469 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:02.017216 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 2 00:21:02.052483 ignition[1060]: INFO : Ignition 2.18.0 Jul 2 00:21:02.052483 ignition[1060]: INFO : Stage: mount Jul 2 00:21:02.052483 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:02.052483 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:02.066688 ignition[1060]: INFO : mount: mount passed Jul 2 00:21:02.066688 ignition[1060]: INFO : Ignition finished successfully Jul 2 00:21:02.055840 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:21:02.076558 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:21:02.080388 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:21:02.094638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:02.104445 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Jul 2 00:21:02.115446 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:02.115481 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:02.119777 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:21:02.125452 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:21:02.126780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:21:02.148776 ignition[1088]: INFO : Ignition 2.18.0 Jul 2 00:21:02.148776 ignition[1088]: INFO : Stage: files Jul 2 00:21:02.152759 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:02.152759 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:02.152759 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:21:02.165332 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:21:02.165332 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:21:02.270486 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:21:02.274230 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:21:02.277872 unknown[1088]: wrote ssh authorized keys file for user: core Jul 2 00:21:02.280808 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:21:02.309408 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:21:02.314421 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 00:21:02.670500 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:21:02.772097 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 
00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 00:21:03.327039 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:21:03.633417 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:03.633417 ignition[1088]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:21:03.664518 ignition[1088]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:21:03.669556 ignition[1088]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:21:03.669556 ignition[1088]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:21:03.677104 ignition[1088]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:21:03.677104 ignition[1088]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:21:03.683748 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:21:03.687877 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:21:03.691737 ignition[1088]: INFO : files: files passed Jul 2 00:21:03.696015 ignition[1088]: INFO : Ignition finished successfully Jul 2 00:21:03.692883 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:21:03.703608 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:21:03.709987 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 2 00:21:03.713131 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:21:03.715100 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:21:03.725688 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:03.725688 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:03.738647 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:03.729196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:21:03.732460 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:21:03.748516 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:21:03.784521 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:21:03.784640 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:21:03.790209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:21:03.795076 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:21:03.797721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:21:03.807604 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:21:03.820291 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:21:03.829588 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:21:03.842366 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:03.845029 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 2 00:21:03.853197 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:21:03.855389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:21:03.855513 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:21:03.860940 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:21:03.870205 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:21:03.872391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:21:03.876913 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:21:03.882147 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:21:03.889776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:21:03.889940 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:21:03.890328 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:21:03.890689 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:21:03.891054 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:21:03.891424 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:21:03.891656 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:21:03.892242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:21:03.892748 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:21:03.893079 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:21:03.911770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:21:03.914947 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:21:03.915097 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 2 00:21:03.920324 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:21:03.920484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:21:03.927887 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:21:03.928025 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:21:03.932366 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 00:21:03.932519 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 00:21:03.953507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:21:03.984943 ignition[1142]: INFO : Ignition 2.18.0 Jul 2 00:21:03.984943 ignition[1142]: INFO : Stage: umount Jul 2 00:21:03.984943 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:03.984943 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:03.984943 ignition[1142]: INFO : umount: umount passed Jul 2 00:21:03.984943 ignition[1142]: INFO : Ignition finished successfully Jul 2 00:21:03.964616 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:21:03.966790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:21:03.967271 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:21:03.974323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:21:03.974550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:21:03.988402 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:21:03.988650 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:21:04.014194 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:21:04.014540 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 2 00:21:04.019907 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:21:04.019974 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:21:04.025163 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:21:04.025263 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:21:04.029473 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:21:04.029519 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 00:21:04.034644 systemd[1]: Stopped target network.target - Network. Jul 2 00:21:04.036642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:21:04.036696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:21:04.039554 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:21:04.041644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:21:04.047923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:21:04.056313 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:21:04.076466 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:21:04.078681 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:21:04.078729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:21:04.085653 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:21:04.085702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:21:04.094703 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:21:04.094775 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:21:04.099260 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:21:04.099311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 2 00:21:04.105217 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:21:04.109797 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:21:04.118765 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:21:04.124483 systemd-networkd[897]: eth0: DHCPv6 lease lost Jul 2 00:21:04.124546 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:21:04.124666 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:21:04.128362 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:21:04.128424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:21:04.142741 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:21:04.142866 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:21:04.147907 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:21:04.147945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:21:04.163617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:21:04.163702 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:21:04.163749 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:21:04.163818 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:21:04.163854 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:21:04.164441 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:21:04.164471 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:21:04.167440 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:21:04.193773 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 2 00:21:04.194841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:21:04.211196 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:21:04.211264 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:21:04.219085 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:21:04.219135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:21:04.234618 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: Data path switched from VF: enP64076s1 Jul 2 00:21:04.223981 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:21:04.224073 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:21:04.231080 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:21:04.231126 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:21:04.234697 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:21:04.234757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:04.245598 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:21:04.253314 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:21:04.253366 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:21:04.258553 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 00:21:04.258608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:21:04.275583 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:21:04.275644 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:21:04.283178 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 2 00:21:04.283237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:04.288503 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:21:04.288604 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:21:04.295783 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:21:04.295865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:21:04.708190 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:21:04.708322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:21:04.708762 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:21:04.708956 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:21:04.709002 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:21:04.727736 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:21:05.021209 systemd[1]: Switching root. 
Jul 2 00:21:05.140517 systemd-journald[176]: Journal stopped Jul 2 00:20:54.080790 kernel: ACPI: RSDP 0x000000003FFFA014 
000024 (v02 VRTUAL) Jul 2 00:20:54.080810 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080827 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080841 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jul 2 00:20:54.080854 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 2 00:20:54.080868 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080883 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080896 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080914 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080928 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080942 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080956 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:20:54.080969 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 2 00:20:54.080983 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jul 2 00:20:54.080997 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 2 00:20:54.081011 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 2 00:20:54.081028 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 2 00:20:54.081042 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 2 00:20:54.081055 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jul 2 00:20:54.081069 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jul 2 00:20:54.081083 kernel: 
ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 2 00:20:54.081096 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jul 2 00:20:54.081109 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 00:20:54.081123 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 00:20:54.081137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 2 00:20:54.081155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jul 2 00:20:54.081183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jul 2 00:20:54.081197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 2 00:20:54.081211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 2 00:20:54.081224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 2 00:20:54.081238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 2 00:20:54.081251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 2 00:20:54.081265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 2 00:20:54.081278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 2 00:20:54.081297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 2 00:20:54.081311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 2 00:20:54.081325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jul 2 00:20:54.081339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jul 2 00:20:54.081353 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jul 2 00:20:54.081366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jul 2 00:20:54.081380 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jul 2 
00:20:54.081394 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jul 2 00:20:54.081408 kernel: Zone ranges: Jul 2 00:20:54.081426 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:20:54.081439 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 00:20:54.081453 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 2 00:20:54.081467 kernel: Movable zone start for each node Jul 2 00:20:54.081481 kernel: Early memory node ranges Jul 2 00:20:54.081495 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 00:20:54.081509 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jul 2 00:20:54.081522 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 2 00:20:54.081536 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 2 00:20:54.081553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 2 00:20:54.081566 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:20:54.081580 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 00:20:54.081594 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jul 2 00:20:54.081608 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 2 00:20:54.081622 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jul 2 00:20:54.081636 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jul 2 00:20:54.081649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:20:54.081663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:20:54.081681 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 2 00:20:54.081695 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 00:20:54.081709 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 2 00:20:54.081722 kernel: Booting paravirtualized kernel on Hyper-V Jul 2 00:20:54.081736 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 
00:20:54.081751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 00:20:54.081763 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jul 2 00:20:54.081778 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jul 2 00:20:54.081791 kernel: pcpu-alloc: [0] 0 1 Jul 2 00:20:54.081807 kernel: Hyper-V: PV spinlocks enabled Jul 2 00:20:54.081821 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 00:20:54.081837 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:20:54.081851 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:20:54.081864 kernel: random: crng init done Jul 2 00:20:54.081877 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 00:20:54.081891 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 00:20:54.081905 kernel: Fallback order for Node 0: 0 Jul 2 00:20:54.081922 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jul 2 00:20:54.081948 kernel: Policy zone: Normal Jul 2 00:20:54.081961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:20:54.081977 kernel: software IO TLB: area num 2. 
Jul 2 00:20:54.081991 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 316268K reserved, 0K cma-reserved) Jul 2 00:20:54.082005 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 00:20:54.082017 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:20:54.082031 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:20:54.082044 kernel: Dynamic Preempt: voluntary Jul 2 00:20:54.082057 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:20:54.082073 kernel: rcu: RCU event tracing is enabled. Jul 2 00:20:54.082090 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 00:20:54.082104 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:20:54.082120 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:20:54.082135 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:20:54.082150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 00:20:54.082183 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 00:20:54.082196 kernel: Using NULL legacy PIC Jul 2 00:20:54.082229 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 2 00:20:54.082246 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 2 00:20:54.082258 kernel: Console: colour dummy device 80x25 Jul 2 00:20:54.082272 kernel: printk: console [tty1] enabled Jul 2 00:20:54.082286 kernel: printk: console [ttyS0] enabled Jul 2 00:20:54.082299 kernel: printk: bootconsole [earlyser0] disabled Jul 2 00:20:54.082313 kernel: ACPI: Core revision 20230628 Jul 2 00:20:54.082324 kernel: Failed to register legacy timer interrupt Jul 2 00:20:54.082341 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:20:54.082355 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 2 00:20:54.082369 kernel: Hyper-V: Using IPI hypercalls Jul 2 00:20:54.082381 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jul 2 00:20:54.082393 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jul 2 00:20:54.082405 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jul 2 00:20:54.082419 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jul 2 00:20:54.082431 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jul 2 00:20:54.082442 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jul 2 00:20:54.082468 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jul 2 00:20:54.082481 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 00:20:54.082494 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 00:20:54.082505 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:20:54.082518 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:20:54.082531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:20:54.082544 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:20:54.082556 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jul 2 00:20:54.082568 kernel: RETBleed: Vulnerable Jul 2 00:20:54.082585 kernel: Speculative Store Bypass: Vulnerable Jul 2 00:20:54.082597 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:20:54.082610 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:20:54.082624 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 00:20:54.082636 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 00:20:54.082648 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 00:20:54.082663 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 00:20:54.082677 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 2 00:20:54.082691 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 2 00:20:54.082705 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 2 00:20:54.082720 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 00:20:54.082737 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 2 00:20:54.082751 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 2 00:20:54.082766 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 2 00:20:54.082780 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jul 2 00:20:54.082794 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:20:54.082808 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:20:54.082822 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:20:54.082836 kernel: SELinux: Initializing. 
Jul 2 00:20:54.082851 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 00:20:54.082865 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 00:20:54.082880 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 2 00:20:54.082894 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:20:54.082912 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:20:54.082926 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:20:54.082940 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 2 00:20:54.082955 kernel: signal: max sigframe size: 3632 Jul 2 00:20:54.082969 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:20:54.082985 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:20:54.082999 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 00:20:54.083014 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:20:54.083028 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:20:54.083045 kernel: .... node #0, CPUs: #1 Jul 2 00:20:54.083060 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jul 2 00:20:54.083076 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 2 00:20:54.083090 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 00:20:54.083105 kernel: smpboot: Max logical packages: 1 Jul 2 00:20:54.083120 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jul 2 00:20:54.083135 kernel: devtmpfs: initialized Jul 2 00:20:54.083149 kernel: x86/mm: Memory block size: 128MB Jul 2 00:20:54.089440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 2 00:20:54.089469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:20:54.089484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 00:20:54.089497 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:20:54.089511 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:20:54.089526 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:20:54.089539 kernel: audit: type=2000 audit(1719879653.027:1): state=initialized audit_enabled=0 res=1 Jul 2 00:20:54.089554 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:20:54.089567 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:20:54.089590 kernel: cpuidle: using governor menu Jul 2 00:20:54.089604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:20:54.089618 kernel: dca service started, version 1.12.1 Jul 2 00:20:54.089632 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jul 2 00:20:54.089647 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 00:20:54.089661 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:20:54.089675 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:20:54.089689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:20:54.089703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:20:54.089719 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:20:54.089733 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:20:54.089746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:20:54.089760 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:20:54.089773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:20:54.089788 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:20:54.089802 kernel: ACPI: Interpreter enabled Jul 2 00:20:54.089817 kernel: ACPI: PM: (supports S0 S5) Jul 2 00:20:54.089831 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:20:54.089848 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:20:54.089863 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 2 00:20:54.089877 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 2 00:20:54.089890 kernel: iommu: Default domain type: Translated Jul 2 00:20:54.089904 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:20:54.089919 kernel: efivars: Registered efivars operations Jul 2 00:20:54.089932 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:20:54.089947 kernel: PCI: System does not support PCI Jul 2 00:20:54.089961 kernel: vgaarb: loaded Jul 2 00:20:54.089978 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jul 2 00:20:54.089991 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:20:54.090005 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:20:54.090018 kernel: pnp: PnP ACPI init Jul 2 00:20:54.090032 kernel: pnp: PnP ACPI: found 3 
devices Jul 2 00:20:54.090046 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:20:54.090060 kernel: NET: Registered PF_INET protocol family Jul 2 00:20:54.090076 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 00:20:54.090091 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 00:20:54.090107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:20:54.090120 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 00:20:54.090132 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 2 00:20:54.090147 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 00:20:54.090161 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 00:20:54.090190 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 00:20:54.090203 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:20:54.090217 kernel: NET: Registered PF_XDP protocol family Jul 2 00:20:54.090231 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:20:54.090249 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 00:20:54.090263 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Jul 2 00:20:54.090278 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 00:20:54.090291 kernel: Initialise system trusted keyrings Jul 2 00:20:54.090305 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 00:20:54.090319 kernel: Key type asymmetric registered Jul 2 00:20:54.090334 kernel: Asymmetric key parser 'x509' registered Jul 2 00:20:54.090347 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:20:54.090361 kernel: io scheduler mq-deadline registered Jul 2 00:20:54.090377 kernel: io scheduler kyber 
registered Jul 2 00:20:54.090390 kernel: io scheduler bfq registered Jul 2 00:20:54.090404 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:20:54.090419 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:20:54.090433 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:20:54.090448 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 00:20:54.090463 kernel: i8042: PNP: No PS/2 controller found. Jul 2 00:20:54.090667 kernel: rtc_cmos 00:02: registered as rtc0 Jul 2 00:20:54.090800 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T00:20:53 UTC (1719879653) Jul 2 00:20:54.090917 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 2 00:20:54.090936 kernel: intel_pstate: CPU model not supported Jul 2 00:20:54.090950 kernel: efifb: probing for efifb Jul 2 00:20:54.090964 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 00:20:54.090979 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 00:20:54.090994 kernel: efifb: scrolling: redraw Jul 2 00:20:54.091009 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 00:20:54.091028 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 00:20:54.091043 kernel: fb0: EFI VGA frame buffer device Jul 2 00:20:54.091057 kernel: pstore: Using crash dump compression: deflate Jul 2 00:20:54.091071 kernel: pstore: Registered efi_pstore as persistent store backend Jul 2 00:20:54.091086 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:20:54.091100 kernel: Segment Routing with IPv6 Jul 2 00:20:54.091116 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:20:54.091131 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:20:54.091146 kernel: Key type dns_resolver registered Jul 2 00:20:54.091159 kernel: IPI shorthand broadcast: enabled Jul 2 00:20:54.091244 kernel: sched_clock: Marking stable (782002300, 44248800)->(1017865100, -191614000) Jul 2 
00:20:54.091257 kernel: registered taskstats version 1 Jul 2 00:20:54.091269 kernel: Loading compiled-in X.509 certificates Jul 2 00:20:54.091284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:20:54.091297 kernel: Key type .fscrypt registered Jul 2 00:20:54.091309 kernel: Key type fscrypt-provisioning registered Jul 2 00:20:54.091327 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:20:54.091340 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:20:54.091358 kernel: ima: No architecture policies found Jul 2 00:20:54.091371 kernel: clk: Disabling unused clocks Jul 2 00:20:54.091386 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:20:54.091400 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:20:54.091415 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:20:54.091429 kernel: Run /init as init process Jul 2 00:20:54.091443 kernel: with arguments: Jul 2 00:20:54.091457 kernel: /init Jul 2 00:20:54.091471 kernel: with environment: Jul 2 00:20:54.091489 kernel: HOME=/ Jul 2 00:20:54.091503 kernel: TERM=linux Jul 2 00:20:54.091517 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:20:54.091535 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:20:54.091554 systemd[1]: Detected virtualization microsoft. Jul 2 00:20:54.091568 systemd[1]: Detected architecture x86-64. Jul 2 00:20:54.091583 systemd[1]: Running in initrd. Jul 2 00:20:54.091598 systemd[1]: No hostname configured, using default hostname. Jul 2 00:20:54.091617 systemd[1]: Hostname set to . 
Jul 2 00:20:54.091633 systemd[1]: Initializing machine ID from random generator. Jul 2 00:20:54.091648 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:20:54.091663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:20:54.091677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:20:54.091693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:20:54.091708 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:20:54.091724 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:20:54.091742 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 00:20:54.091758 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:20:54.091774 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:20:54.091788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:20:54.091803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:20:54.091818 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:20:54.091832 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:20:54.091850 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:20:54.091864 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:20:54.091878 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:20:54.091894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:20:54.091908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 2 00:20:54.091923 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:20:54.091938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:20:54.091952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:20:54.091970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:20:54.091984 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:20:54.091998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 00:20:54.092013 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:20:54.092028 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:20:54.092042 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:20:54.092057 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:20:54.092071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:20:54.092086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:54.092128 systemd-journald[176]: Collecting audit messages is disabled. Jul 2 00:20:54.092161 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:20:54.092188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:20:54.092203 systemd-journald[176]: Journal started Jul 2 00:20:54.092241 systemd-journald[176]: Runtime Journal (/run/log/journal/b4c55f749fed48f582b8437582c5dda7) is 8.0M, max 158.8M, 150.8M free. Jul 2 00:20:54.082500 systemd-modules-load[177]: Inserted module 'overlay' Jul 2 00:20:54.106532 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:20:54.107217 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:20:54.111716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 00:20:54.134538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:54.138474 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:20:54.142346 kernel: Bridge firewalling registered Jul 2 00:20:54.142541 systemd-modules-load[177]: Inserted module 'br_netfilter' Jul 2 00:20:54.146387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:20:54.153339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:20:54.153843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:20:54.159330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:20:54.160371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:20:54.168015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:20:54.187491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:20:54.198468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:20:54.203693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:20:54.206594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 2 00:20:54.216221 dracut-cmdline[205]: dracut-dracut-053 Jul 2 00:20:54.221003 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:20:54.238155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:20:54.252365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:20:54.298283 systemd-resolved[250]: Positive Trust Anchors: Jul 2 00:20:54.298300 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:20:54.298360 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:20:54.301797 systemd-resolved[250]: Defaulting to hostname 'linux'. Jul 2 00:20:54.334403 kernel: SCSI subsystem initialized Jul 2 00:20:54.303450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:20:54.308521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:20:54.347187 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 00:20:54.360190 kernel: iscsi: registered transport (tcp) Jul 2 00:20:54.386385 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:20:54.386485 kernel: QLogic iSCSI HBA Driver Jul 2 00:20:54.421773 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:20:54.432344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:20:54.463676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:20:54.463755 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:20:54.466875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:20:54.511193 kernel: raid6: avx512x4 gen() 18544 MB/s Jul 2 00:20:54.530201 kernel: raid6: avx512x2 gen() 18486 MB/s Jul 2 00:20:54.548186 kernel: raid6: avx512x1 gen() 18143 MB/s Jul 2 00:20:54.567191 kernel: raid6: avx2x4 gen() 18414 MB/s Jul 2 00:20:54.585182 kernel: raid6: avx2x2 gen() 18312 MB/s Jul 2 00:20:54.605138 kernel: raid6: avx2x1 gen() 14176 MB/s Jul 2 00:20:54.605206 kernel: raid6: using algorithm avx512x4 gen() 18544 MB/s Jul 2 00:20:54.626215 kernel: raid6: .... xor() 8091 MB/s, rmw enabled Jul 2 00:20:54.626249 kernel: raid6: using avx512x2 recovery algorithm Jul 2 00:20:54.653192 kernel: xor: automatically using best checksumming function avx Jul 2 00:20:54.823196 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:20:54.832918 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:20:54.844415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:20:54.855971 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jul 2 00:20:54.860261 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:20:54.874560 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 2 00:20:54.885603 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jul 2 00:20:54.910602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:20:54.922339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:20:54.961944 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:20:54.970425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:20:54.985844 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:20:54.993824 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:20:55.002281 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:20:55.002363 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:20:55.010444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:20:55.027909 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:20:55.064200 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:20:55.078118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:20:55.078354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:20:55.081621 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:55.093772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:20:55.094029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:55.097026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:55.113185 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 00:20:55.113230 kernel: AES CTR mode by8 optimization enabled Jul 2 00:20:55.116193 kernel: hv_vmbus: Vmbus version:5.2 Jul 2 00:20:55.118628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:55.127482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:20:55.131211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:55.140547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:55.159350 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 00:20:55.159392 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 00:20:55.173453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:55.180029 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 00:20:55.188349 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:55.200387 kernel: PTP clock support registered Jul 2 00:20:55.206469 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 00:20:55.207302 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 00:20:55.233760 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 00:20:55.233830 kernel: hv_vmbus: registering driver hv_utils Jul 2 00:20:55.236304 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 00:20:55.238375 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 00:20:55.974033 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 00:20:55.973407 systemd-resolved[250]: Clock change detected. Flushing caches. Jul 2 00:20:55.976665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 00:20:55.987167 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 00:20:55.987187 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 00:20:55.989541 kernel: scsi host1: storvsc_host_t Jul 2 00:20:55.989792 kernel: scsi host0: storvsc_host_t Jul 2 00:20:55.991941 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 00:20:55.999452 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 00:20:56.007450 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 00:20:56.014261 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 00:20:56.014316 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 00:20:56.025672 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 00:20:56.027209 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 00:20:56.027233 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 00:20:56.037886 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 00:20:56.055181 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 00:20:56.055386 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 00:20:56.055569 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 00:20:56.055731 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 00:20:56.055905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:20:56.055927 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 00:20:56.134708 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: VF slot 1 added Jul 2 00:20:56.143453 kernel: hv_vmbus: registering driver hv_pci Jul 2 00:20:56.147449 kernel: hv_pci 801b16fd-fa4c-4906-bc48-38472f96f11a: PCI VMBus probing: Using version 0x10004 Jul 2 00:20:56.192476 kernel: hv_pci 801b16fd-fa4c-4906-bc48-38472f96f11a: PCI host bridge to bus fa4c:00 Jul 2 00:20:56.192720 kernel: pci_bus fa4c:00: root bus resource [mem 
0xfe0000000-0xfe00fffff window] Jul 2 00:20:56.192935 kernel: pci_bus fa4c:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 00:20:56.193141 kernel: pci fa4c:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 2 00:20:56.193393 kernel: pci fa4c:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 00:20:56.193625 kernel: pci fa4c:00:02.0: enabling Extended Tags Jul 2 00:20:56.193858 kernel: pci fa4c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fa4c:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 2 00:20:56.194081 kernel: pci_bus fa4c:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 00:20:56.194277 kernel: pci fa4c:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 00:20:56.383564 kernel: mlx5_core fa4c:00:02.0: enabling device (0000 -> 0002) Jul 2 00:20:56.647934 kernel: mlx5_core fa4c:00:02.0: firmware version: 14.30.1284 Jul 2 00:20:56.648163 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (455) Jul 2 00:20:56.648186 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (445) Jul 2 00:20:56.648206 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: VF registering: eth1 Jul 2 00:20:56.648372 kernel: mlx5_core fa4c:00:02.0 eth1: joined to eth0 Jul 2 00:20:56.648578 kernel: mlx5_core fa4c:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 2 00:20:56.537077 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 2 00:20:56.598242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 00:20:56.615487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 00:20:56.663113 kernel: mlx5_core fa4c:00:02.0 enP64076s1: renamed from eth1 Jul 2 00:20:56.640111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jul 2 00:20:56.643057 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 00:20:56.657173 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:20:56.678457 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:20:57.692547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:20:57.694054 disk-uuid[600]: The operation has completed successfully. Jul 2 00:20:57.786776 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:20:57.786900 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:20:57.801622 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:20:57.804836 sh[713]: Success Jul 2 00:20:57.837458 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 00:20:58.039373 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:20:58.051695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:20:58.056500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:20:58.072467 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:20:58.072520 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:58.077679 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:20:58.080269 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:20:58.082839 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:20:58.459798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:20:58.465306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 2 00:20:58.480573 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:20:58.485984 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:20:58.498752 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:58.498811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:58.501283 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:20:58.537455 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:20:58.552792 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:58.552393 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:20:58.563544 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:20:58.577632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:20:58.583407 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:20:58.595619 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:20:58.614319 systemd-networkd[897]: lo: Link UP Jul 2 00:20:58.614329 systemd-networkd[897]: lo: Gained carrier Jul 2 00:20:58.616474 systemd-networkd[897]: Enumeration completed Jul 2 00:20:58.616706 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:20:58.618958 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:58.618963 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:20:58.623510 systemd[1]: Reached target network.target - Network. 
Jul 2 00:20:58.689456 kernel: mlx5_core fa4c:00:02.0 enP64076s1: Link up Jul 2 00:20:58.721466 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: Data path switched to VF: enP64076s1 Jul 2 00:20:58.722148 systemd-networkd[897]: enP64076s1: Link UP Jul 2 00:20:58.722266 systemd-networkd[897]: eth0: Link UP Jul 2 00:20:58.722495 systemd-networkd[897]: eth0: Gained carrier Jul 2 00:20:58.722506 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:58.728655 systemd-networkd[897]: enP64076s1: Gained carrier Jul 2 00:20:58.768496 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 00:20:59.648069 ignition[892]: Ignition 2.18.0 Jul 2 00:20:59.648082 ignition[892]: Stage: fetch-offline Jul 2 00:20:59.648138 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.648149 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.648332 ignition[892]: parsed url from cmdline: "" Jul 2 00:20:59.648338 ignition[892]: no config URL provided Jul 2 00:20:59.648345 ignition[892]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:20:59.648355 ignition[892]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:20:59.648362 ignition[892]: failed to fetch config: resource requires networking Jul 2 00:20:59.650001 ignition[892]: Ignition finished successfully Jul 2 00:20:59.668814 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:20:59.677744 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 00:20:59.692698 ignition[906]: Ignition 2.18.0 Jul 2 00:20:59.692708 ignition[906]: Stage: fetch Jul 2 00:20:59.692925 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.692938 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.693023 ignition[906]: parsed url from cmdline: "" Jul 2 00:20:59.693026 ignition[906]: no config URL provided Jul 2 00:20:59.693031 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:20:59.693037 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:20:59.693062 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 00:20:59.788284 ignition[906]: GET result: OK Jul 2 00:20:59.788470 ignition[906]: config has been read from IMDS userdata Jul 2 00:20:59.788503 ignition[906]: parsing config with SHA512: a8e1d8f286d7fed3b01d42a223e78403844768fd88be838066251974de92c5a5d3ccb96e913e96ad0d637ece3311caf1add6b7d628fa70eee99b3fd0dd31c4a0 Jul 2 00:20:59.793556 unknown[906]: fetched base config from "system" Jul 2 00:20:59.793571 unknown[906]: fetched base config from "system" Jul 2 00:20:59.793935 ignition[906]: fetch: fetch complete Jul 2 00:20:59.793579 unknown[906]: fetched user config from "azure" Jul 2 00:20:59.793940 ignition[906]: fetch: fetch passed Jul 2 00:20:59.795753 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 00:20:59.793984 ignition[906]: Ignition finished successfully Jul 2 00:20:59.812618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 00:20:59.828427 ignition[913]: Ignition 2.18.0 Jul 2 00:20:59.828453 ignition[913]: Stage: kargs Jul 2 00:20:59.828666 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.828676 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.829555 ignition[913]: kargs: kargs passed Jul 2 00:20:59.829598 ignition[913]: Ignition finished successfully Jul 2 00:20:59.840372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:20:59.847585 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:20:59.860645 systemd-networkd[897]: enP64076s1: Gained IPv6LL Jul 2 00:20:59.861101 ignition[920]: Ignition 2.18.0 Jul 2 00:20:59.861106 ignition[920]: Stage: disks Jul 2 00:20:59.863143 ignition[920]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:59.863159 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:20:59.871239 ignition[920]: disks: disks passed Jul 2 00:20:59.871292 ignition[920]: Ignition finished successfully Jul 2 00:20:59.872078 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:20:59.875817 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:20:59.879727 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:20:59.890733 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:20:59.895478 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:20:59.901458 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:20:59.909589 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:20:59.987498 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 00:20:59.994409 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:21:00.002913 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 00:21:00.114459 kernel: EXT4-fs (sda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:21:00.115137 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:21:00.117710 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:21:00.156596 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:00.162078 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:21:00.170509 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Jul 2 00:21:00.171709 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 00:21:00.184793 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:00.184841 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:00.184860 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:21:00.184974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:21:00.185089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:21:00.195756 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:21:00.199122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:21:00.200745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:21:00.217582 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 2 00:21:00.756687 systemd-networkd[897]: eth0: Gained IPv6LL Jul 2 00:21:00.841939 coreos-metadata[942]: Jul 02 00:21:00.841 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 00:21:00.847915 coreos-metadata[942]: Jul 02 00:21:00.847 INFO Fetch successful Jul 2 00:21:00.850510 coreos-metadata[942]: Jul 02 00:21:00.848 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 00:21:00.860285 coreos-metadata[942]: Jul 02 00:21:00.860 INFO Fetch successful Jul 2 00:21:00.875276 coreos-metadata[942]: Jul 02 00:21:00.875 INFO wrote hostname ci-3975.1.1-a-7b42818af6 to /sysroot/etc/hostname Jul 2 00:21:00.877066 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 00:21:01.116874 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:21:01.152277 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:21:01.157403 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:21:01.162186 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:21:01.991361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:21:02.000564 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:21:02.007627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:21:02.014469 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:02.017216 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 2 00:21:02.052483 ignition[1060]: INFO : Ignition 2.18.0 Jul 2 00:21:02.052483 ignition[1060]: INFO : Stage: mount Jul 2 00:21:02.052483 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:02.052483 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:02.066688 ignition[1060]: INFO : mount: mount passed Jul 2 00:21:02.066688 ignition[1060]: INFO : Ignition finished successfully Jul 2 00:21:02.055840 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:21:02.076558 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:21:02.080388 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:21:02.094638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:02.104445 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Jul 2 00:21:02.115446 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:02.115481 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:02.119777 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:21:02.125452 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:21:02.126780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:21:02.148776 ignition[1088]: INFO : Ignition 2.18.0 Jul 2 00:21:02.148776 ignition[1088]: INFO : Stage: files Jul 2 00:21:02.152759 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:02.152759 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:02.152759 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:21:02.165332 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:21:02.165332 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:21:02.270486 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:21:02.274230 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:21:02.277872 unknown[1088]: wrote ssh authorized keys file for user: core Jul 2 00:21:02.280808 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:21:02.309408 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:21:02.314421 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 00:21:02.670500 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:21:02.772097 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 
00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:02.777322 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 00:21:03.327039 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:21:03.633417 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 00:21:03.633417 ignition[1088]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:21:03.664518 ignition[1088]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:21:03.669556 ignition[1088]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:21:03.669556 ignition[1088]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:21:03.677104 ignition[1088]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:21:03.677104 ignition[1088]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:21:03.683748 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:21:03.687877 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:21:03.691737 ignition[1088]: INFO : files: files passed Jul 2 00:21:03.696015 ignition[1088]: INFO : Ignition finished successfully Jul 2 00:21:03.692883 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:21:03.703608 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:21:03.709987 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 2 00:21:03.713131 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:21:03.715100 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:21:03.725688 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:03.725688 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:03.738647 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:03.729196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:21:03.732460 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:21:03.748516 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:21:03.784521 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:21:03.784640 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:21:03.790209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:21:03.795076 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:21:03.797721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:21:03.807604 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:21:03.820291 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:21:03.829588 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:21:03.842366 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:03.845029 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 2 00:21:03.853197 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:21:03.855389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:21:03.855513 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:21:03.860940 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:21:03.870205 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:21:03.872391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:21:03.876913 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:21:03.882147 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:21:03.889776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:21:03.889940 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:21:03.890328 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:21:03.890689 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:21:03.891054 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:21:03.891424 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:21:03.891656 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:21:03.892242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:21:03.892748 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:21:03.893079 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:21:03.911770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:21:03.914947 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:21:03.915097 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 2 00:21:03.920324 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:21:03.920484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:21:03.927887 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:21:03.928025 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:21:03.932366 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 00:21:03.932519 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 00:21:03.953507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:21:03.984943 ignition[1142]: INFO : Ignition 2.18.0 Jul 2 00:21:03.984943 ignition[1142]: INFO : Stage: umount Jul 2 00:21:03.984943 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:03.984943 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:03.984943 ignition[1142]: INFO : umount: umount passed Jul 2 00:21:03.984943 ignition[1142]: INFO : Ignition finished successfully Jul 2 00:21:03.964616 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:21:03.966790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:21:03.967271 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:21:03.974323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:21:03.974550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:21:03.988402 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:21:03.988650 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:21:04.014194 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:21:04.014540 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 2 00:21:04.019907 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:21:04.019974 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:21:04.025163 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:21:04.025263 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:21:04.029473 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:21:04.029519 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 00:21:04.034644 systemd[1]: Stopped target network.target - Network. Jul 2 00:21:04.036642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:21:04.036696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:21:04.039554 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:21:04.041644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:21:04.047923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:21:04.056313 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:21:04.076466 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:21:04.078681 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:21:04.078729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:21:04.085653 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:21:04.085702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:21:04.094703 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:21:04.094775 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:21:04.099260 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:21:04.099311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 2 00:21:04.105217 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:21:04.109797 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:21:04.118765 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:21:04.124483 systemd-networkd[897]: eth0: DHCPv6 lease lost Jul 2 00:21:04.124546 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:21:04.124666 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:21:04.128362 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:21:04.128424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:21:04.142741 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:21:04.142866 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:21:04.147907 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:21:04.147945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:21:04.163617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:21:04.163702 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:21:04.163749 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:21:04.163818 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:21:04.163854 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:21:04.164441 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:21:04.164471 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:21:04.167440 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:21:04.193773 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 2 00:21:04.194841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:21:04.211196 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:21:04.211264 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:21:04.219085 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:21:04.219135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:21:04.234618 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: Data path switched from VF: enP64076s1 Jul 2 00:21:04.223981 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:21:04.224073 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:21:04.231080 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:21:04.231126 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:21:04.234697 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:21:04.234757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:04.245598 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:21:04.253314 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:21:04.253366 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:21:04.258553 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 00:21:04.258608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:21:04.275583 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:21:04.275644 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:21:04.283178 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 2 00:21:04.283237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:04.288503 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:21:04.288604 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:21:04.295783 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:21:04.295865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:21:04.708190 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:21:04.708322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:21:04.708762 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:21:04.708956 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:21:04.709002 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:21:04.727736 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:21:05.021209 systemd[1]: Switching root. Jul 2 00:21:05.140517 systemd-journald[176]: Journal stopped Jul 2 00:21:16.546764 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
Jul 2 00:21:16.546796 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:21:16.546808 kernel: SELinux: policy capability open_perms=1 Jul 2 00:21:16.546819 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:21:16.546827 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:21:16.546837 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:21:16.546848 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:21:16.546861 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:21:16.546870 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:21:16.546878 kernel: audit: type=1403 audit(1719879666.397:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:21:16.546888 systemd[1]: Successfully loaded SELinux policy in 289.304ms. Jul 2 00:21:16.546900 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.982ms. Jul 2 00:21:16.546910 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:21:16.546923 systemd[1]: Detected virtualization microsoft. Jul 2 00:21:16.546935 systemd[1]: Detected architecture x86-64. Jul 2 00:21:16.546947 systemd[1]: Detected first boot. Jul 2 00:21:16.546957 systemd[1]: Hostname set to . Jul 2 00:21:16.546971 systemd[1]: Initializing machine ID from random generator. Jul 2 00:21:16.546981 zram_generator::config[1186]: No configuration found. Jul 2 00:21:16.546996 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:21:16.547008 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:21:16.547018 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jul 2 00:21:16.547030 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:21:16.547040 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:21:16.547050 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:21:16.547063 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:21:16.547075 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:21:16.547087 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:21:16.547097 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:21:16.547110 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:21:16.547122 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:21:16.547132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:21:16.547142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:21:16.547154 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:21:16.547167 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:21:16.547179 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:21:16.547190 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:21:16.547201 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 00:21:16.547214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:21:16.547224 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jul 2 00:21:16.547239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 00:21:16.547250 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:21:16.547265 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:21:16.547277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:21:16.547288 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:21:16.547301 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:21:16.547311 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:21:16.547324 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:21:16.547334 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:21:16.547348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:21:16.547360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:21:16.547372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:21:16.547385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:21:16.547396 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:21:16.547411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:21:16.547421 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:21:16.547452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:21:16.547463 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:21:16.547476 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:21:16.547490 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 2 00:21:16.547501 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:21:16.547514 systemd[1]: Reached target machines.target - Containers. Jul 2 00:21:16.547528 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:21:16.547540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:21:16.547552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:21:16.547564 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:21:16.547577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:21:16.547587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:21:16.547600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:21:16.547610 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:21:16.547623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:21:16.547637 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:21:16.547649 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:21:16.547662 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:21:16.547672 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:21:16.547685 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:21:16.547695 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:21:16.547708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 2 00:21:16.547718 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:21:16.547733 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:21:16.547744 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:21:16.547757 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:21:16.547767 systemd[1]: Stopped verity-setup.service. Jul 2 00:21:16.547780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:21:16.547791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:21:16.547804 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:21:16.547814 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:21:16.547848 systemd-journald[1277]: Collecting audit messages is disabled. Jul 2 00:21:16.547871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:21:16.547885 systemd-journald[1277]: Journal started Jul 2 00:21:16.547911 systemd-journald[1277]: Runtime Journal (/run/log/journal/cfe6afde6f6146a7b76319dc4b0e1f1d) is 8.0M, max 158.8M, 150.8M free. Jul 2 00:21:15.329513 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:21:15.456255 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 2 00:21:15.456671 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:21:16.556459 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:21:16.556499 kernel: loop: module loaded Jul 2 00:21:16.563751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:21:16.566707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 2 00:21:16.569276 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:21:16.572554 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:21:16.572710 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:21:16.575869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:21:16.576016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:21:16.579138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:21:16.579302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:21:16.582539 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:21:16.583646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:21:16.586672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:21:16.589477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:21:16.601907 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:21:16.604945 kernel: fuse: init (API version 7.39) Jul 2 00:21:16.605419 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:21:16.605603 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:21:16.611792 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:21:16.622007 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:21:16.629576 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:21:16.632498 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:21:16.632542 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 2 00:21:16.635841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:21:16.645577 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:21:16.649290 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:21:16.651678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:21:16.656549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:21:16.660596 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:21:16.663679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:21:16.665599 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:21:16.668549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:21:16.671940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:21:16.676678 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:21:16.682902 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:21:16.688792 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:21:16.692079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:21:16.821099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:21:16.827826 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:21:16.838807 udevadm[1326]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 00:21:16.875019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:21:16.984681 kernel: ACPI: bus type drm_connector registered Jul 2 00:21:16.980520 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:21:16.980712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:21:17.068943 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:21:17.083746 kernel: loop0: detected capacity change from 0 to 56904 Jul 2 00:21:17.083844 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:21:17.272816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:21:17.286043 systemd-journald[1277]: Time spent on flushing to /var/log/journal/cfe6afde6f6146a7b76319dc4b0e1f1d is 31.499ms for 965 entries. Jul 2 00:21:17.286043 systemd-journald[1277]: System Journal (/var/log/journal/cfe6afde6f6146a7b76319dc4b0e1f1d) is 8.0M, max 2.6G, 2.6G free. Jul 2 00:21:22.015013 systemd-journald[1277]: Received client request to flush runtime journal. Jul 2 00:21:22.015114 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:21:22.015144 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 00:21:22.015165 kernel: loop2: detected capacity change from 0 to 139904 Jul 2 00:21:17.525632 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jul 2 00:21:17.525660 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jul 2 00:21:17.532490 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:21:17.546673 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:21:17.558505 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 2 00:21:17.561684 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:21:17.574599 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:21:19.076723 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:21:19.089580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:21:20.475000 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Jul 2 00:21:20.475014 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Jul 2 00:21:20.482931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:21:22.017084 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:21:23.893733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:21:23.904682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:21:23.922477 kernel: loop3: detected capacity change from 0 to 80568 Jul 2 00:21:23.928844 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Jul 2 00:21:24.230741 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:21:24.231528 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:21:24.375084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:21:24.388659 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:21:24.452181 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 00:21:24.466465 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1367) Jul 2 00:21:24.475628 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jul 2 00:21:24.541449 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:21:24.592749 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:21:24.617082 kernel: hv_vmbus: registering driver hv_balloon Jul 2 00:21:24.617182 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 00:21:24.626043 kernel: loop4: detected capacity change from 0 to 56904 Jul 2 00:21:24.626146 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 00:21:24.628422 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 00:21:24.634570 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 00:21:24.640025 kernel: Console: switching to colour dummy device 80x25 Jul 2 00:21:24.640475 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 00:21:24.657456 kernel: loop5: detected capacity change from 0 to 211296 Jul 2 00:21:24.674817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:24.679154 kernel: loop6: detected capacity change from 0 to 139904 Jul 2 00:21:24.768475 kernel: loop7: detected capacity change from 0 to 80568 Jul 2 00:21:24.784722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:21:24.784928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:24.804682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:24.821532 (sd-merge)[1397]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 2 00:21:24.822837 (sd-merge)[1397]: Merged extensions into '/usr'. Jul 2 00:21:24.857532 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:21:24.857548 systemd[1]: Reloading... 
Jul 2 00:21:24.886090 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1367) Jul 2 00:21:24.887924 systemd-networkd[1354]: lo: Link UP Jul 2 00:21:24.887934 systemd-networkd[1354]: lo: Gained carrier Jul 2 00:21:24.906380 systemd-networkd[1354]: Enumeration completed Jul 2 00:21:24.906866 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:24.906871 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:21:25.005464 kernel: mlx5_core fa4c:00:02.0 enP64076s1: Link up Jul 2 00:21:25.008912 zram_generator::config[1456]: No configuration found. Jul 2 00:21:25.039589 kernel: hv_netvsc 0022489b-bd4e-0022-489b-bd4e0022489b eth0: Data path switched to VF: enP64076s1 Jul 2 00:21:25.043276 systemd-networkd[1354]: enP64076s1: Link UP Jul 2 00:21:25.043702 systemd-networkd[1354]: eth0: Link UP Jul 2 00:21:25.044085 systemd-networkd[1354]: eth0: Gained carrier Jul 2 00:21:25.044187 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:25.059032 systemd-networkd[1354]: enP64076s1: Gained carrier Jul 2 00:21:25.081531 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 00:21:25.155607 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 2 00:21:25.280221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:21:25.358849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 00:21:25.363040 systemd[1]: Reloading finished in 505 ms. 
Jul 2 00:21:25.397864 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:21:25.400989 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:21:25.404286 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:21:25.429668 systemd[1]: Starting ensure-sysext.service... Jul 2 00:21:25.430924 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:21:25.435596 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:21:25.440222 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:21:25.444408 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:21:25.455736 systemd[1]: Reloading requested from client PID 1524 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:21:25.455752 systemd[1]: Reloading... Jul 2 00:21:25.539188 zram_generator::config[1563]: No configuration found. Jul 2 00:21:25.573466 lvm[1525]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:21:25.585051 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:21:25.586925 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:21:25.591555 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:21:25.591973 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. Jul 2 00:21:25.592064 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. Jul 2 00:21:25.624033 systemd-tmpfiles[1528]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 2 00:21:25.624197 systemd-tmpfiles[1528]: Skipping /boot Jul 2 00:21:25.634885 systemd-tmpfiles[1528]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:21:25.635054 systemd-tmpfiles[1528]: Skipping /boot Jul 2 00:21:25.706048 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:21:25.785046 systemd[1]: Reloading finished in 328 ms. Jul 2 00:21:25.804842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:25.819937 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:21:25.823650 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:21:25.827139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:21:25.837010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:21:25.844733 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:21:25.850712 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:21:25.857812 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:21:25.867308 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:21:25.871046 lvm[1628]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:21:25.878788 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:21:25.885738 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:21:25.893629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 00:21:25.893898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:21:25.908189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:21:25.914704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:21:25.929700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:21:25.934700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:21:25.934875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:21:25.936159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:21:25.937518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:21:25.944815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:21:25.954713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:21:25.954895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:21:25.958903 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:21:25.959067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:21:25.968984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:21:25.969650 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:21:25.979905 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jul 2 00:21:25.988877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:21:25.989614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:21:25.995717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:21:26.004753 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:21:26.014221 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:21:26.021614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:21:26.024415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:21:26.024779 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:21:26.027627 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:21:26.033425 systemd-resolved[1630]: Positive Trust Anchors: Jul 2 00:21:26.033473 systemd-resolved[1630]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:21:26.033531 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:21:26.034227 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 2 00:21:26.039370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:21:26.039575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:21:26.043400 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:21:26.043566 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:21:26.046491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:21:26.046693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:21:26.049944 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:21:26.050093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:21:26.055068 systemd[1]: Finished ensure-sysext.service. Jul 2 00:21:26.060383 systemd-resolved[1630]: Using system hostname 'ci-3975.1.1-a-7b42818af6'. Jul 2 00:21:26.064515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:21:26.064577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:21:26.106343 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:21:26.109664 systemd[1]: Reached target network.target - Network. Jul 2 00:21:26.111826 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:26.135065 augenrules[1664]: No rules Jul 2 00:21:26.135947 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:21:26.528898 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:21:26.532970 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 2 00:21:26.548603 systemd-networkd[1354]: enP64076s1: Gained IPv6LL Jul 2 00:21:26.548959 systemd-networkd[1354]: eth0: Gained IPv6LL Jul 2 00:21:26.551846 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:21:26.556614 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:21:30.101856 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:21:30.177631 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:21:30.185716 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:21:30.196866 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:21:30.200809 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:21:30.205691 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:21:30.209004 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:21:30.212446 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:21:30.217505 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:21:30.220820 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:21:30.223949 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:21:30.223986 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:21:30.226323 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:21:30.230129 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:21:30.234045 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 2 00:21:30.245785 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:21:30.249263 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:21:30.251979 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:21:30.254335 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:21:30.256516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:21:30.256547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:21:30.273830 systemd[1]: Starting chronyd.service - NTP client/server... Jul 2 00:21:30.277555 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:21:30.284611 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:21:30.288623 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:21:30.292579 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:21:30.298674 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:21:30.301858 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:21:30.305563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:30.312625 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:21:30.319593 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:21:30.327546 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:21:30.331351 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:21:30.339579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 2 00:21:30.347614 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:21:30.354117 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:21:30.354803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:21:30.361690 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:21:30.366787 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:21:30.371021 jq[1681]: false Jul 2 00:21:30.379416 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:21:30.379643 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:21:30.386833 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:21:30.387290 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 00:21:30.410314 (chronyd)[1677]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 2 00:21:30.417291 jq[1694]: true Jul 2 00:21:30.443665 update_engine[1693]: I0702 00:21:30.443603 1693 main.cc:92] Flatcar Update Engine starting Jul 2 00:21:30.452371 extend-filesystems[1682]: Found loop4 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found loop5 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found loop6 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found loop7 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda1 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda2 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda3 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found usr Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda4 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda6 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda7 Jul 2 00:21:30.452371 extend-filesystems[1682]: Found sda9 Jul 2 00:21:30.452371 extend-filesystems[1682]: Checking size of /dev/sda9 Jul 2 00:21:30.545570 jq[1711]: true Jul 2 00:21:30.545761 tar[1699]: linux-amd64/helm Jul 2 00:21:30.462701 chronyd[1720]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 2 00:21:30.457194 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:21:30.457419 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:21:30.464756 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:21:30.499637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 2 00:21:30.549749 chronyd[1720]: Timezone right/UTC failed leap second check, ignoring Jul 2 00:21:30.549978 chronyd[1720]: Loaded seccomp filter (level 2) Jul 2 00:21:30.553215 systemd[1]: Started chronyd.service - NTP client/server. Jul 2 00:21:30.572931 extend-filesystems[1682]: Old size kept for /dev/sda9 Jul 2 00:21:30.572931 extend-filesystems[1682]: Found sr0 Jul 2 00:21:30.585838 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:21:30.586119 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:21:30.593190 dbus-daemon[1680]: [system] SELinux support is enabled Jul 2 00:21:30.593572 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:21:30.602655 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:21:30.602684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:21:30.610231 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:21:30.610372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:21:30.626880 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:21:30.637464 update_engine[1693]: I0702 00:21:30.636665 1693 update_check_scheduler.cc:74] Next update check in 3m12s Jul 2 00:21:30.639603 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:21:30.714973 systemd-logind[1692]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:21:30.716587 systemd-logind[1692]: New seat seat0. Jul 2 00:21:30.720764 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 2 00:21:30.744104 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1758) Jul 2 00:21:30.739274 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:21:30.744247 bash[1747]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:21:30.743882 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:21:30.746920 coreos-metadata[1679]: Jul 02 00:21:30.745 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 00:21:30.753485 coreos-metadata[1679]: Jul 02 00:21:30.753 INFO Fetch successful Jul 2 00:21:30.753485 coreos-metadata[1679]: Jul 02 00:21:30.753 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 2 00:21:30.760541 coreos-metadata[1679]: Jul 02 00:21:30.759 INFO Fetch successful Jul 2 00:21:30.762121 coreos-metadata[1679]: Jul 02 00:21:30.762 INFO Fetching http://168.63.129.16/machine/3536bc74-19be-421c-8c42-01677f87c3a6/33533aef%2D825c%2D414c%2Dbc33%2D24b2aa523cb1.%5Fci%2D3975.1.1%2Da%2D7b42818af6?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 2 00:21:30.766052 coreos-metadata[1679]: Jul 02 00:21:30.766 INFO Fetch successful Jul 2 00:21:30.766524 coreos-metadata[1679]: Jul 02 00:21:30.766 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 2 00:21:30.779611 coreos-metadata[1679]: Jul 02 00:21:30.779 INFO Fetch successful Jul 2 00:21:30.853491 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 00:21:30.857976 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 2 00:21:31.016870 locksmithd[1749]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:21:31.438488 sshd_keygen[1708]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:21:31.481482 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:21:31.495695 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:21:31.509348 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 2 00:21:31.520554 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:21:31.520773 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:21:31.527685 tar[1699]: linux-amd64/LICENSE Jul 2 00:21:31.527685 tar[1699]: linux-amd64/README.md Jul 2 00:21:31.531724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:21:31.558910 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:21:31.562624 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:21:31.573925 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:21:31.584400 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:21:31.587346 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:21:31.592294 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 2 00:21:31.981619 containerd[1714]: time="2024-07-02T00:21:31.980129900Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:21:32.010276 containerd[1714]: time="2024-07-02T00:21:32.010213600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:21:32.010276 containerd[1714]: time="2024-07-02T00:21:32.010275800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014114 containerd[1714]: time="2024-07-02T00:21:32.013941400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014114 containerd[1714]: time="2024-07-02T00:21:32.013987500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014535 containerd[1714]: time="2024-07-02T00:21:32.014346500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014535 containerd[1714]: time="2024-07-02T00:21:32.014531600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:21:32.014866 containerd[1714]: time="2024-07-02T00:21:32.014654800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014866 containerd[1714]: time="2024-07-02T00:21:32.014725200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014866 containerd[1714]: time="2024-07-02T00:21:32.014743100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:32.014866 containerd[1714]: time="2024-07-02T00:21:32.014823400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:21:32.015072 containerd[1714]: time="2024-07-02T00:21:32.015047300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:32.015115 containerd[1714]: time="2024-07-02T00:21:32.015081800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:21:32.015115 containerd[1714]: time="2024-07-02T00:21:32.015098500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:32.015259 containerd[1714]: time="2024-07-02T00:21:32.015235200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:32.015259 containerd[1714]: time="2024-07-02T00:21:32.015254400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:21:32.015350 containerd[1714]: time="2024-07-02T00:21:32.015321300Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:21:32.015350 containerd[1714]: time="2024-07-02T00:21:32.015338600Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072059000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072116000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072135400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072176500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072197700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072213200Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072229900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072424300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072467300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072486700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072526000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072547400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072571000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.072780 containerd[1714]: time="2024-07-02T00:21:32.072589600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.072607800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.072626300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.072645700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.072663600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.072679500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.072807200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.073118400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.073154300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.073176500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.073207800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.073277500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073313 containerd[1714]: time="2024-07-02T00:21:32.073309600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073328100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073346500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073364100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073381800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073399200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073416100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073449700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073610500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073632200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073651700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073669200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073716700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073738000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.073751 containerd[1714]: time="2024-07-02T00:21:32.073756000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:21:32.074200 containerd[1714]: time="2024-07-02T00:21:32.073772800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:21:32.074236 containerd[1714]: time="2024-07-02T00:21:32.074122400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:21:32.074236 containerd[1714]: time="2024-07-02T00:21:32.074197800Z" level=info msg="Connect containerd service" Jul 2 00:21:32.074236 containerd[1714]: time="2024-07-02T00:21:32.074239200Z" level=info msg="using legacy CRI server" Jul 2 00:21:32.074511 containerd[1714]: time="2024-07-02T00:21:32.074249100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:21:32.074511 containerd[1714]: time="2024-07-02T00:21:32.074375600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075123000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075186700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075210800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075228100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075244900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075531800Z" level=info msg="Start subscribing containerd event" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075581900Z" level=info msg="Start recovering state" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075688000Z" level=info msg="Start event monitor" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075715200Z" level=info msg="Start snapshots syncer" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075727100Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:21:32.075768 containerd[1714]: time="2024-07-02T00:21:32.075737000Z" level=info msg="Start streaming server" Jul 2 00:21:32.076211 containerd[1714]: time="2024-07-02T00:21:32.076025400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:21:32.076211 containerd[1714]: time="2024-07-02T00:21:32.076079700Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:21:32.076236 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:21:32.077126 containerd[1714]: time="2024-07-02T00:21:32.077106400Z" level=info msg="containerd successfully booted in 0.098816s" Jul 2 00:21:32.288184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:32.292184 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 2 00:21:32.293747 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:21:32.297184 systemd[1]: Startup finished in 890ms (firmware) + 28.806s (loader) + 920ms (kernel) + 11.685s (initrd) + 26.188s (userspace) = 1min 8.490s. Jul 2 00:21:32.734396 login[1826]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 00:21:32.735048 login[1827]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 00:21:32.749393 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:21:32.756547 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:21:32.759121 systemd-logind[1692]: New session 2 of user core. Jul 2 00:21:32.764131 systemd-logind[1692]: New session 1 of user core. Jul 2 00:21:32.783516 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:21:32.792811 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:21:32.796921 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:33.116582 systemd[1854]: Queued start job for default target default.target. Jul 2 00:21:33.122852 systemd[1854]: Created slice app.slice - User Application Slice. Jul 2 00:21:33.122960 systemd[1854]: Reached target paths.target - Paths. Jul 2 00:21:33.122979 systemd[1854]: Reached target timers.target - Timers. Jul 2 00:21:33.124558 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:21:33.138227 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:21:33.138503 systemd[1854]: Reached target sockets.target - Sockets. Jul 2 00:21:33.138616 systemd[1854]: Reached target basic.target - Basic System. Jul 2 00:21:33.138675 systemd[1854]: Reached target default.target - Main User Target. 
Jul 2 00:21:33.138712 systemd[1854]: Startup finished in 332ms. Jul 2 00:21:33.138900 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:21:33.144605 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:21:33.145552 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:21:33.182963 waagent[1832]: 2024-07-02T00:21:33.182871Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.183282Z INFO Daemon Daemon OS: flatcar 3975.1.1 Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.184260Z INFO Daemon Daemon Python: 3.11.9 Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.185343Z INFO Daemon Daemon Run daemon Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.185962Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.1.1' Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.186670Z INFO Daemon Daemon Using waagent for provisioning Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.187601Z INFO Daemon Daemon Activate resource disk Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.188282Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.192741Z INFO Daemon Daemon Found device: None Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.193824Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.194246Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.196493Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 00:21:33.217912 waagent[1832]: 2024-07-02T00:21:33.197149Z INFO Daemon Daemon Running default 
provisioning handler Jul 2 00:21:33.235826 waagent[1832]: 2024-07-02T00:21:33.222827Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 2 00:21:33.235826 waagent[1832]: 2024-07-02T00:21:33.223884Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 00:21:33.235826 waagent[1832]: 2024-07-02T00:21:33.225022Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 00:21:33.235826 waagent[1832]: 2024-07-02T00:21:33.225792Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 00:21:33.307106 waagent[1832]: 2024-07-02T00:21:33.305633Z INFO Daemon Daemon Successfully mounted dvd Jul 2 00:21:33.325221 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 00:21:33.328231 waagent[1832]: 2024-07-02T00:21:33.327543Z INFO Daemon Daemon Detect protocol endpoint Jul 2 00:21:33.330260 waagent[1832]: 2024-07-02T00:21:33.329904Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 00:21:33.333775 waagent[1832]: 2024-07-02T00:21:33.333383Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 2 00:21:33.336461 waagent[1832]: 2024-07-02T00:21:33.336392Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 00:21:33.336861 waagent[1832]: 2024-07-02T00:21:33.336816Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 00:21:33.337581 waagent[1832]: 2024-07-02T00:21:33.337543Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 00:21:33.349295 waagent[1832]: 2024-07-02T00:21:33.347778Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 00:21:33.349295 waagent[1832]: 2024-07-02T00:21:33.348182Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 00:21:33.349295 waagent[1832]: 2024-07-02T00:21:33.348408Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 00:21:33.357562 kubelet[1843]: E0702 00:21:33.357509 1843 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:21:33.360304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:21:33.360518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:21:33.360905 systemd[1]: kubelet.service: Consumed 1.029s CPU time. Jul 2 00:21:33.646919 waagent[1832]: 2024-07-02T00:21:33.646820Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 00:21:33.650265 waagent[1832]: 2024-07-02T00:21:33.650159Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 2 00:21:33.656299 waagent[1832]: 2024-07-02T00:21:33.656235Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 00:21:33.686957 waagent[1832]: 2024-07-02T00:21:33.686887Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jul 2 00:21:33.703925 waagent[1832]: 2024-07-02T00:21:33.687772Z INFO Daemon Jul 2 00:21:33.703925 waagent[1832]: 2024-07-02T00:21:33.688561Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d6a984f9-860b-4050-be90-ca2c8633f3e7 eTag: 9572878417049915862 source: Fabric] Jul 2 00:21:33.703925 waagent[1832]: 2024-07-02T00:21:33.689829Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 2 00:21:33.703925 waagent[1832]: 2024-07-02T00:21:33.691207Z INFO Daemon Jul 2 00:21:33.703925 waagent[1832]: 2024-07-02T00:21:33.691916Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 2 00:21:33.703925 waagent[1832]: 2024-07-02T00:21:33.696702Z INFO Daemon Daemon Downloading artifacts profile blob Jul 2 00:21:33.768760 waagent[1832]: 2024-07-02T00:21:33.768683Z INFO Daemon Downloaded certificate {'thumbprint': '8B71A16855636719BFF03664C093981D60FB58C1', 'hasPrivateKey': False} Jul 2 00:21:33.773573 waagent[1832]: 2024-07-02T00:21:33.773514Z INFO Daemon Downloaded certificate {'thumbprint': '2F7A9AE130539363F7FC42D86823EBE11DF3F057', 'hasPrivateKey': True} Jul 2 00:21:33.779090 waagent[1832]: 2024-07-02T00:21:33.774055Z INFO Daemon Fetch goal state completed Jul 2 00:21:33.781422 waagent[1832]: 2024-07-02T00:21:33.781377Z INFO Daemon Daemon Starting provisioning Jul 2 00:21:33.787658 waagent[1832]: 2024-07-02T00:21:33.781592Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 2 00:21:33.787658 waagent[1832]: 2024-07-02T00:21:33.782165Z INFO Daemon Daemon Set hostname [ci-3975.1.1-a-7b42818af6] Jul 2 00:21:33.967829 waagent[1832]: 2024-07-02T00:21:33.967733Z INFO Daemon Daemon Publish hostname [ci-3975.1.1-a-7b42818af6] Jul 2 00:21:33.975816 waagent[1832]: 2024-07-02T00:21:33.968472Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 00:21:33.975816 waagent[1832]: 2024-07-02T00:21:33.969265Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 00:21:33.996149 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:33.996159 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:21:33.996208 systemd-networkd[1354]: eth0: DHCP lease lost Jul 2 00:21:33.997511 waagent[1832]: 2024-07-02T00:21:33.997394Z INFO Daemon Daemon Create user account if not exists Jul 2 00:21:34.015853 waagent[1832]: 2024-07-02T00:21:33.997828Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 00:21:34.015853 waagent[1832]: 2024-07-02T00:21:33.999167Z INFO Daemon Daemon Configure sudoer Jul 2 00:21:34.015853 waagent[1832]: 2024-07-02T00:21:34.000257Z INFO Daemon Daemon Configure sshd Jul 2 00:21:34.015853 waagent[1832]: 2024-07-02T00:21:34.001090Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 2 00:21:34.015853 waagent[1832]: 2024-07-02T00:21:34.001838Z INFO Daemon Daemon Deploy ssh public key. 
Jul 2 00:21:34.017531 systemd-networkd[1354]: eth0: DHCPv6 lease lost Jul 2 00:21:34.048516 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 00:21:35.292273 waagent[1832]: 2024-07-02T00:21:35.292189Z INFO Daemon Daemon Provisioning complete Jul 2 00:21:35.306748 waagent[1832]: 2024-07-02T00:21:35.306677Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 00:21:35.313175 waagent[1832]: 2024-07-02T00:21:35.307023Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 2 00:21:35.313175 waagent[1832]: 2024-07-02T00:21:35.308762Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 2 00:21:35.432318 waagent[1905]: 2024-07-02T00:21:35.432222Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 2 00:21:35.432771 waagent[1905]: 2024-07-02T00:21:35.432373Z INFO ExtHandler ExtHandler OS: flatcar 3975.1.1 Jul 2 00:21:35.432771 waagent[1905]: 2024-07-02T00:21:35.432473Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 2 00:21:35.503927 waagent[1905]: 2024-07-02T00:21:35.503818Z INFO ExtHandler ExtHandler Distro: flatcar-3975.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 00:21:35.504185 waagent[1905]: 2024-07-02T00:21:35.504124Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 00:21:35.504296 waagent[1905]: 2024-07-02T00:21:35.504244Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 00:21:35.513025 waagent[1905]: 2024-07-02T00:21:35.512944Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 00:21:35.518638 waagent[1905]: 2024-07-02T00:21:35.518586Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 00:21:35.519082 waagent[1905]: 2024-07-02T00:21:35.519030Z INFO ExtHandler 
Jul 2 00:21:35.519159 waagent[1905]: 2024-07-02T00:21:35.519120Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7101d94f-21c7-4a9a-9540-d75ab2802515 eTag: 9572878417049915862 source: Fabric] Jul 2 00:21:35.519488 waagent[1905]: 2024-07-02T00:21:35.519420Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 2 00:21:35.520056 waagent[1905]: 2024-07-02T00:21:35.519998Z INFO ExtHandler Jul 2 00:21:35.520119 waagent[1905]: 2024-07-02T00:21:35.520082Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 00:21:35.524128 waagent[1905]: 2024-07-02T00:21:35.524085Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 00:21:35.598376 waagent[1905]: 2024-07-02T00:21:35.598231Z INFO ExtHandler Downloaded certificate {'thumbprint': '8B71A16855636719BFF03664C093981D60FB58C1', 'hasPrivateKey': False} Jul 2 00:21:35.598807 waagent[1905]: 2024-07-02T00:21:35.598749Z INFO ExtHandler Downloaded certificate {'thumbprint': '2F7A9AE130539363F7FC42D86823EBE11DF3F057', 'hasPrivateKey': True} Jul 2 00:21:35.599234 waagent[1905]: 2024-07-02T00:21:35.599184Z INFO ExtHandler Fetch goal state completed Jul 2 00:21:35.614382 waagent[1905]: 2024-07-02T00:21:35.614309Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1905 Jul 2 00:21:35.614556 waagent[1905]: 2024-07-02T00:21:35.614508Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 2 00:21:35.616138 waagent[1905]: 2024-07-02T00:21:35.616077Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 00:21:35.616531 waagent[1905]: 2024-07-02T00:21:35.616487Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 00:21:35.830948 waagent[1905]: 2024-07-02T00:21:35.830266Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up 
waagent-network-setup.service
Jul 2 00:21:35.830948 waagent[1905]: 2024-07-02T00:21:35.830565Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 2 00:21:35.838050 waagent[1905]: 2024-07-02T00:21:35.837867Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 2 00:21:35.844648 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit waagent.service)...
Jul 2 00:21:35.844664 systemd[1]: Reloading...
Jul 2 00:21:35.925465 zram_generator::config[1951]: No configuration found.
Jul 2 00:21:36.050215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:21:36.132440 systemd[1]: Reloading finished in 287 ms.
Jul 2 00:21:36.159495 waagent[1905]: 2024-07-02T00:21:36.159002Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jul 2 00:21:36.168274 systemd[1]: Reloading requested from client PID 2008 ('systemctl') (unit waagent.service)...
Jul 2 00:21:36.168290 systemd[1]: Reloading...
Jul 2 00:21:36.238463 zram_generator::config[2037]: No configuration found.
Jul 2 00:21:36.368993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:21:36.445260 systemd[1]: Reloading finished in 276 ms.
Jul 2 00:21:36.471891 waagent[1905]: 2024-07-02T00:21:36.471719Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 2 00:21:36.472246 waagent[1905]: 2024-07-02T00:21:36.471939Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 2 00:21:39.630923 waagent[1905]: 2024-07-02T00:21:39.630835Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 2 00:21:39.631590 waagent[1905]: 2024-07-02T00:21:39.631531Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul 2 00:21:39.632328 waagent[1905]: 2024-07-02T00:21:39.632276Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 2 00:21:39.633125 waagent[1905]: 2024-07-02T00:21:39.633069Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 2 00:21:39.633552 waagent[1905]: 2024-07-02T00:21:39.633502Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 00:21:39.633623 waagent[1905]: 2024-07-02T00:21:39.633548Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 00:21:39.633718 waagent[1905]: 2024-07-02T00:21:39.633651Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 00:21:39.633824 waagent[1905]: 2024-07-02T00:21:39.633777Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 00:21:39.634091 waagent[1905]: 2024-07-02T00:21:39.634042Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 2 00:21:39.634380 waagent[1905]: 2024-07-02T00:21:39.634317Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 00:21:39.634499 waagent[1905]: 2024-07-02T00:21:39.634386Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 2 00:21:39.634817 waagent[1905]: 2024-07-02T00:21:39.634769Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 2 00:21:39.635014 waagent[1905]: 2024-07-02T00:21:39.634959Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 2 00:21:39.635366 waagent[1905]: 2024-07-02T00:21:39.635308Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 2 00:21:39.635584 waagent[1905]: 2024-07-02T00:21:39.635521Z INFO EnvHandler ExtHandler Configure routes
Jul 2 00:21:39.635728 waagent[1905]: 2024-07-02T00:21:39.635686Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 2 00:21:39.635728 waagent[1905]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 2 00:21:39.635728 waagent[1905]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jul 2 00:21:39.635728 waagent[1905]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 2 00:21:39.635728 waagent[1905]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 2 00:21:39.635728 waagent[1905]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 00:21:39.635728 waagent[1905]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 00:21:39.636916 waagent[1905]: 2024-07-02T00:21:39.636855Z INFO EnvHandler ExtHandler Gateway:None
Jul 2 00:21:39.637386 waagent[1905]: 2024-07-02T00:21:39.637335Z INFO EnvHandler ExtHandler Routes:None
Jul 2 00:21:39.646467 waagent[1905]: 2024-07-02T00:21:39.646413Z INFO ExtHandler ExtHandler
Jul 2 00:21:39.646583 waagent[1905]: 2024-07-02T00:21:39.646544Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 753a8dd2-3ad6-41c7-92c3-104f1f6d0d01 correlation 5aa467c5-6ae5-42f2-a02c-b7396b2e6187 created: 2024-07-02T00:20:12.386726Z]
Jul 2 00:21:39.646924 waagent[1905]: 2024-07-02T00:21:39.646878Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 2 00:21:39.647476 waagent[1905]: 2024-07-02T00:21:39.647410Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 2 00:21:39.679028 waagent[1905]: 2024-07-02T00:21:39.678969Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 27E3C5B8-F91B-4E74-A698-EF71729504B5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jul 2 00:21:40.018165 waagent[1905]: 2024-07-02T00:21:40.018004Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 2 00:21:40.018165 waagent[1905]: Executing ['ip', '-a', '-o', 'link']:
Jul 2 00:21:40.018165 waagent[1905]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 2 00:21:40.018165 waagent[1905]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:bd:4e brd ff:ff:ff:ff:ff:ff
Jul 2 00:21:40.018165 waagent[1905]: 3: enP64076s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:bd:4e brd ff:ff:ff:ff:ff:ff\ altname enP64076p0s2
Jul 2 00:21:40.018165 waagent[1905]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 2 00:21:40.018165 waagent[1905]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 2 00:21:40.018165 waagent[1905]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 2 00:21:40.018165 waagent[1905]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 2 00:21:40.018165 waagent[1905]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 2 00:21:40.018165 waagent[1905]: 2: eth0 inet6 fe80::222:48ff:fe9b:bd4e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 2 00:21:40.018165 waagent[1905]: 3: enP64076s1 inet6 fe80::222:48ff:fe9b:bd4e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 2 00:21:40.279202 waagent[1905]: 2024-07-02T00:21:40.279070Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jul 2 00:21:40.279202 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:21:40.279202 waagent[1905]: pkts bytes target prot opt in out source destination
Jul 2 00:21:40.279202 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:21:40.279202 waagent[1905]: pkts bytes target prot opt in out source destination
Jul 2 00:21:40.279202 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:21:40.279202 waagent[1905]: pkts bytes target prot opt in out source destination
Jul 2 00:21:40.279202 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 00:21:40.279202 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 2 00:21:40.279202 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 00:21:40.282626 waagent[1905]: 2024-07-02T00:21:40.282562Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 2 00:21:40.282626 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:21:40.282626 waagent[1905]: pkts bytes target prot opt in out source destination
Jul 2 00:21:40.282626 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:21:40.282626 waagent[1905]: pkts bytes target prot opt in out source destination
Jul 2 00:21:40.282626 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:21:40.282626 waagent[1905]: pkts bytes target prot opt in out source destination
Jul 2 00:21:40.282626 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 00:21:40.282626 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 2 00:21:40.282626 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 00:21:40.282997 waagent[1905]: 2024-07-02T00:21:40.282873Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 2 00:21:43.600405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:21:43.605673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:21:45.795856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:21:45.807730 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:21:46.407223 kubelet[2137]: E0702 00:21:46.407125 2137 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:21:46.411353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:21:46.411597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:21:54.343492 chronyd[1720]: Selected source PHC0
Jul 2 00:21:56.600235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:21:56.605707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:21:56.694061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:21:56.698566 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:21:56.740752 kubelet[2157]: E0702 00:21:56.740692 2157 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:21:56.743665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:21:56.743872 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:22:06.850259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:22:06.855690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:07.209199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:07.220756 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:22:07.484363 kubelet[2174]: E0702 00:22:07.484242 2174 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:22:07.486945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:22:07.487150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:22:12.738242 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 2 00:22:15.545747 update_engine[1693]: I0702 00:22:15.545647 1693 update_attempter.cc:509] Updating boot flags...
Jul 2 00:22:15.619463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2194)
Jul 2 00:22:15.725093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2201)
Jul 2 00:22:17.600079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 00:22:17.607030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:17.950013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:17.957725 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:22:18.003375 kubelet[2256]: E0702 00:22:18.003315 2256 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:22:18.005967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:22:18.006185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:22:19.418761 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:22:19.423722 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:49656.service - OpenSSH per-connection server daemon (10.200.16.10:49656).
Jul 2 00:22:20.106492 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 49656 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:20.108169 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:20.113740 systemd-logind[1692]: New session 3 of user core.
Jul 2 00:22:20.120636 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:22:20.674950 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:49672.service - OpenSSH per-connection server daemon (10.200.16.10:49672).
Jul 2 00:22:21.322848 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 49672 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:21.324631 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:21.328750 systemd-logind[1692]: New session 4 of user core.
Jul 2 00:22:21.334731 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:22:21.782272 sshd[2270]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:21.786316 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:49672.service: Deactivated successfully.
Jul 2 00:22:21.788159 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:22:21.788856 systemd-logind[1692]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:22:21.789862 systemd-logind[1692]: Removed session 4.
Jul 2 00:22:21.895206 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:49674.service - OpenSSH per-connection server daemon (10.200.16.10:49674).
Jul 2 00:22:22.542311 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 49674 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:22.544032 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:22.548702 systemd-logind[1692]: New session 5 of user core.
Jul 2 00:22:22.554641 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:22:22.996114 sshd[2277]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:22.999669 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:49674.service: Deactivated successfully.
Jul 2 00:22:23.001943 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:22:23.003699 systemd-logind[1692]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:22:23.004930 systemd-logind[1692]: Removed session 5.
Jul 2 00:22:23.109671 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:49676.service - OpenSSH per-connection server daemon (10.200.16.10:49676).
Jul 2 00:22:23.770948 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 49676 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:23.772678 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:23.776531 systemd-logind[1692]: New session 6 of user core.
Jul 2 00:22:23.783569 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:22:24.229957 sshd[2284]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:24.233109 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:49676.service: Deactivated successfully.
Jul 2 00:22:24.235183 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:22:24.236693 systemd-logind[1692]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:22:24.237751 systemd-logind[1692]: Removed session 6.
Jul 2 00:22:24.343160 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:49684.service - OpenSSH per-connection server daemon (10.200.16.10:49684).
Jul 2 00:22:24.984937 sshd[2291]: Accepted publickey for core from 10.200.16.10 port 49684 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:24.986615 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:24.991283 systemd-logind[1692]: New session 7 of user core.
Jul 2 00:22:24.998575 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:22:25.458981 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:22:25.459312 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:22:25.488704 sudo[2294]: pam_unix(sudo:session): session closed for user root
Jul 2 00:22:25.592215 sshd[2291]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:25.596071 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:49684.service: Deactivated successfully.
Jul 2 00:22:25.598499 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:22:25.600309 systemd-logind[1692]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:22:25.601252 systemd-logind[1692]: Removed session 7.
Jul 2 00:22:25.706643 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:49688.service - OpenSSH per-connection server daemon (10.200.16.10:49688).
Jul 2 00:22:26.355297 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 49688 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:26.356939 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:26.361949 systemd-logind[1692]: New session 8 of user core.
Jul 2 00:22:26.367597 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:22:26.713962 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:22:26.714291 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:22:26.717680 sudo[2303]: pam_unix(sudo:session): session closed for user root
Jul 2 00:22:26.722416 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:22:26.722751 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:22:26.740761 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:22:26.742241 auditctl[2306]: No rules
Jul 2 00:22:26.742616 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:22:26.742810 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:22:26.745286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:22:26.770550 augenrules[2324]: No rules
Jul 2 00:22:26.771927 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:22:26.773663 sudo[2302]: pam_unix(sudo:session): session closed for user root
Jul 2 00:22:26.878604 sshd[2299]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:26.882177 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:49688.service: Deactivated successfully.
Jul 2 00:22:26.884490 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:22:26.886245 systemd-logind[1692]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:22:26.887193 systemd-logind[1692]: Removed session 8.
Jul 2 00:22:26.992385 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:49702.service - OpenSSH per-connection server daemon (10.200.16.10:49702).
Jul 2 00:22:27.640447 sshd[2332]: Accepted publickey for core from 10.200.16.10 port 49702 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I
Jul 2 00:22:27.642113 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:27.646678 systemd-logind[1692]: New session 9 of user core.
Jul 2 00:22:27.652591 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:22:27.994973 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:22:27.995315 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:22:28.099994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 2 00:22:28.106685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:28.310068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:28.315906 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:22:28.734277 kubelet[2348]: E0702 00:22:28.734175 2348 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:22:28.736969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:22:28.737169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:22:29.024715 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:22:29.026386 (dockerd)[2360]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:22:29.896490 dockerd[2360]: time="2024-07-02T00:22:29.896411145Z" level=info msg="Starting up"
Jul 2 00:22:30.025990 dockerd[2360]: time="2024-07-02T00:22:30.025944362Z" level=info msg="Loading containers: start."
Jul 2 00:22:30.197461 kernel: Initializing XFRM netlink socket
Jul 2 00:22:30.478298 systemd-networkd[1354]: docker0: Link UP
Jul 2 00:22:30.501676 dockerd[2360]: time="2024-07-02T00:22:30.501633333Z" level=info msg="Loading containers: done."
Jul 2 00:22:30.830880 dockerd[2360]: time="2024-07-02T00:22:30.830768350Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:22:30.831041 dockerd[2360]: time="2024-07-02T00:22:30.831000153Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:22:30.831151 dockerd[2360]: time="2024-07-02T00:22:30.831126055Z" level=info msg="Daemon has completed initialization"
Jul 2 00:22:30.888033 dockerd[2360]: time="2024-07-02T00:22:30.887558746Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:22:30.887740 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:22:32.708656 containerd[1714]: time="2024-07-02T00:22:32.708612987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 00:22:33.348486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299917813.mount: Deactivated successfully.
Jul 2 00:22:35.239848 containerd[1714]: time="2024-07-02T00:22:35.239784632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:35.242231 containerd[1714]: time="2024-07-02T00:22:35.242161365Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235845"
Jul 2 00:22:35.247249 containerd[1714]: time="2024-07-02T00:22:35.247181534Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:35.253741 containerd[1714]: time="2024-07-02T00:22:35.253680024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:35.254959 containerd[1714]: time="2024-07-02T00:22:35.254666738Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 2.546011151s"
Jul 2 00:22:35.254959 containerd[1714]: time="2024-07-02T00:22:35.254708038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\""
Jul 2 00:22:35.276204 containerd[1714]: time="2024-07-02T00:22:35.276159635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 00:22:37.170510 containerd[1714]: time="2024-07-02T00:22:37.170453751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:37.175164 containerd[1714]: time="2024-07-02T00:22:37.175102516Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069755"
Jul 2 00:22:37.178574 containerd[1714]: time="2024-07-02T00:22:37.178518663Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:37.186187 containerd[1714]: time="2024-07-02T00:22:37.186135668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:37.187301 containerd[1714]: time="2024-07-02T00:22:37.187166583Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 1.910778945s"
Jul 2 00:22:37.187301 containerd[1714]: time="2024-07-02T00:22:37.187205083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 00:22:37.208806 containerd[1714]: time="2024-07-02T00:22:37.208768982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 00:22:38.417716 containerd[1714]: time="2024-07-02T00:22:38.417668512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:38.419965 containerd[1714]: time="2024-07-02T00:22:38.419903043Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153811"
Jul 2 00:22:38.423421 containerd[1714]: time="2024-07-02T00:22:38.423366991Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:38.428250 containerd[1714]: time="2024-07-02T00:22:38.428190158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:38.429296 containerd[1714]: time="2024-07-02T00:22:38.429158371Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.220343089s"
Jul 2 00:22:38.429296 containerd[1714]: time="2024-07-02T00:22:38.429197272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\""
Jul 2 00:22:38.450048 containerd[1714]: time="2024-07-02T00:22:38.450016560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 00:22:38.850233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 2 00:22:38.856884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:39.250027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:39.260755 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:22:39.481198 kubelet[2572]: E0702 00:22:39.481139 2572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:22:39.483889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:22:39.484112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:22:40.231177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207496842.mount: Deactivated successfully.
Jul 2 00:22:40.688284 containerd[1714]: time="2024-07-02T00:22:40.688235036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.690305 containerd[1714]: time="2024-07-02T00:22:40.690248664Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409342"
Jul 2 00:22:40.693566 containerd[1714]: time="2024-07-02T00:22:40.693503809Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.696882 containerd[1714]: time="2024-07-02T00:22:40.696828455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.697456 containerd[1714]: time="2024-07-02T00:22:40.697397563Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.247342703s"
Jul 2 00:22:40.697529 containerd[1714]: time="2024-07-02T00:22:40.697455364Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jul 2 00:22:40.718128 containerd[1714]: time="2024-07-02T00:22:40.718091149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:22:41.244168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622646627.mount: Deactivated successfully.
Jul 2 00:22:42.458823 containerd[1714]: time="2024-07-02T00:22:42.458766241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:42.462639 containerd[1714]: time="2024-07-02T00:22:42.462575802Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jul 2 00:22:42.466085 containerd[1714]: time="2024-07-02T00:22:42.466026057Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:42.470994 containerd[1714]: time="2024-07-02T00:22:42.470962637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:42.472088 containerd[1714]: time="2024-07-02T00:22:42.471984653Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.753854003s"
Jul 2 00:22:42.472088 containerd[1714]: time="2024-07-02T00:22:42.472024054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:22:42.497769 containerd[1714]: time="2024-07-02T00:22:42.497730267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:22:43.520009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934659804.mount: Deactivated successfully.
Jul 2 00:22:43.540975 containerd[1714]: time="2024-07-02T00:22:43.540927550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:43.544153 containerd[1714]: time="2024-07-02T00:22:43.544096901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jul 2 00:22:43.548922 containerd[1714]: time="2024-07-02T00:22:43.548870478Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:43.556919 containerd[1714]: time="2024-07-02T00:22:43.556868707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:43.557718 containerd[1714]: time="2024-07-02T00:22:43.557573518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.05980245s"
Jul 2 00:22:43.557718 containerd[1714]: time="2024-07-02T00:22:43.557612819Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:22:43.578579 containerd[1714]: time="2024-07-02T00:22:43.578519155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:22:44.122292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582728695.mount: Deactivated successfully.
Jul 2 00:22:46.420989 containerd[1714]: time="2024-07-02T00:22:46.420934883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:46.422947 containerd[1714]: time="2024-07-02T00:22:46.422890415Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jul 2 00:22:46.426724 containerd[1714]: time="2024-07-02T00:22:46.426669875Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:46.434448 containerd[1714]: time="2024-07-02T00:22:46.434382899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:46.435624 containerd[1714]: time="2024-07-02T00:22:46.435488417Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.856927562s"
Jul 2 00:22:46.435624 containerd[1714]: time="2024-07-02T00:22:46.435526918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:22:49.319630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:49.327719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:49.351923 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)...
Jul 2 00:22:49.351935 systemd[1]: Reloading...
Jul 2 00:22:49.431558 zram_generator::config[2792]: No configuration found.
Jul 2 00:22:49.567699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:22:49.647509 systemd[1]: Reloading finished in 295 ms.
Jul 2 00:22:49.766002 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:22:49.766133 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:22:49.766509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:49.781247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:50.969981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:50.975819 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:22:51.018704 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:22:51.018704 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release.
Image garbage collector will get sandbox image information from CRI. Jul 2 00:22:51.018704 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:22:51.019147 kubelet[2862]: I0702 00:22:51.018745 2862 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:22:51.355300 kubelet[2862]: I0702 00:22:51.355262 2862 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:22:51.355300 kubelet[2862]: I0702 00:22:51.355296 2862 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:22:51.355614 kubelet[2862]: I0702 00:22:51.355592 2862 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:22:51.735941 kubelet[2862]: E0702 00:22:51.735827 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.736862 kubelet[2862]: I0702 00:22:51.736826 2862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:22:51.750200 kubelet[2862]: I0702 00:22:51.750177 2862 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:22:51.750490 kubelet[2862]: I0702 00:22:51.750470 2862 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:22:51.750706 kubelet[2862]: I0702 00:22:51.750681 2862 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:22:51.750873 kubelet[2862]: I0702 00:22:51.750715 2862 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:22:51.750873 kubelet[2862]: I0702 00:22:51.750729 2862 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:22:51.750873 kubelet[2862]: I0702 
00:22:51.750850 2862 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:22:51.750992 kubelet[2862]: I0702 00:22:51.750973 2862 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:22:51.750992 kubelet[2862]: I0702 00:22:51.750991 2862 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:22:51.751061 kubelet[2862]: I0702 00:22:51.751021 2862 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:22:51.751061 kubelet[2862]: I0702 00:22:51.751035 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:22:51.753883 kubelet[2862]: W0702 00:22:51.752485 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.753883 kubelet[2862]: E0702 00:22:51.752540 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.753883 kubelet[2862]: W0702 00:22:51.753506 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7b42818af6&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.753883 kubelet[2862]: E0702 00:22:51.753556 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7b42818af6&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.753883 kubelet[2862]: I0702 00:22:51.753632 2862 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:22:51.757790 kubelet[2862]: I0702 00:22:51.757534 2862 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:22:51.757790 kubelet[2862]: W0702 00:22:51.757616 2862 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:22:51.758411 kubelet[2862]: I0702 00:22:51.758266 2862 server.go:1256] "Started kubelet" Jul 2 00:22:51.759164 kubelet[2862]: I0702 00:22:51.759136 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:22:51.765046 kubelet[2862]: I0702 00:22:51.764853 2862 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:22:51.766427 kubelet[2862]: I0702 00:22:51.765823 2862 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:22:51.767624 kubelet[2862]: I0702 00:22:51.767133 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:22:51.767624 kubelet[2862]: I0702 00:22:51.767332 2862 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:22:51.769623 kubelet[2862]: E0702 00:22:51.769258 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-a-7b42818af6.17de3d8a4407a7ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-7b42818af6,UID:ci-3975.1.1-a-7b42818af6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-7b42818af6,},FirstTimestamp:2024-07-02 00:22:51.758241742 +0000 UTC 
m=+0.778559937,LastTimestamp:2024-07-02 00:22:51.758241742 +0000 UTC m=+0.778559937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-7b42818af6,}" Jul 2 00:22:51.771124 kubelet[2862]: I0702 00:22:51.770762 2862 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:22:51.773632 kubelet[2862]: E0702 00:22:51.772321 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7b42818af6?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Jul 2 00:22:51.773632 kubelet[2862]: I0702 00:22:51.772511 2862 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:22:51.773632 kubelet[2862]: I0702 00:22:51.772581 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:22:51.773632 kubelet[2862]: I0702 00:22:51.773011 2862 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:22:51.774265 kubelet[2862]: I0702 00:22:51.774243 2862 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:22:51.774657 kubelet[2862]: W0702 00:22:51.774617 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.774784 kubelet[2862]: E0702 00:22:51.774769 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: 
connection refused Jul 2 00:22:51.775917 kubelet[2862]: E0702 00:22:51.775902 2862 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:22:51.776215 kubelet[2862]: I0702 00:22:51.776199 2862 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:22:51.812129 kubelet[2862]: I0702 00:22:51.812105 2862 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:22:51.812129 kubelet[2862]: I0702 00:22:51.812128 2862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:22:51.812129 kubelet[2862]: I0702 00:22:51.812146 2862 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:22:51.817977 kubelet[2862]: I0702 00:22:51.817950 2862 policy_none.go:49] "None policy: Start" Jul 2 00:22:51.818451 kubelet[2862]: I0702 00:22:51.818418 2862 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:22:51.818532 kubelet[2862]: I0702 00:22:51.818495 2862 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:22:51.828569 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:22:51.843047 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:22:51.846705 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:22:51.851265 kubelet[2862]: I0702 00:22:51.851159 2862 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 2 00:22:51.851342 kubelet[2862]: I0702 00:22:51.851272 2862 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:22:51.852513 kubelet[2862]: I0702 00:22:51.851558 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:22:51.852867 kubelet[2862]: I0702 00:22:51.852852 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:22:51.852961 kubelet[2862]: I0702 00:22:51.852952 2862 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:22:51.853040 kubelet[2862]: I0702 00:22:51.853030 2862 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:22:51.853140 kubelet[2862]: E0702 00:22:51.853130 2862 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 00:22:51.858184 kubelet[2862]: W0702 00:22:51.858160 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.858948 kubelet[2862]: E0702 00:22:51.858922 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 00:22:51.861857 kubelet[2862]: E0702 00:22:51.861839 2862 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-a-7b42818af6\" not found" Jul 2 00:22:51.873328 kubelet[2862]: I0702 00:22:51.873311 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.873631 kubelet[2862]: E0702 00:22:51.873614 2862 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.954074 kubelet[2862]: I0702 00:22:51.954010 2862 topology_manager.go:215] "Topology Admit Handler" podUID="13af3f3d48ce1a08e90c8b035797f219" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.956098 kubelet[2862]: I0702 00:22:51.956054 2862 topology_manager.go:215] "Topology Admit Handler" podUID="1dc3896f0b3f357472ad4472485710a9" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.958211 kubelet[2862]: I0702 00:22:51.957821 2862 topology_manager.go:215] "Topology Admit Handler" podUID="2376ec5a8d2fe0b6684067e12b6fa36b" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.965415 systemd[1]: Created slice kubepods-burstable-pod13af3f3d48ce1a08e90c8b035797f219.slice - libcontainer container kubepods-burstable-pod13af3f3d48ce1a08e90c8b035797f219.slice. 
Jul 2 00:22:51.973263 kubelet[2862]: E0702 00:22:51.973244 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7b42818af6?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Jul 2 00:22:51.975048 kubelet[2862]: I0702 00:22:51.974758 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13af3f3d48ce1a08e90c8b035797f219-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" (UID: \"13af3f3d48ce1a08e90c8b035797f219\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975048 kubelet[2862]: I0702 00:22:51.974798 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13af3f3d48ce1a08e90c8b035797f219-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" (UID: \"13af3f3d48ce1a08e90c8b035797f219\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975048 kubelet[2862]: I0702 00:22:51.974828 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975048 kubelet[2862]: I0702 00:22:51.974858 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " 
pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975048 kubelet[2862]: I0702 00:22:51.974892 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2376ec5a8d2fe0b6684067e12b6fa36b-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-7b42818af6\" (UID: \"2376ec5a8d2fe0b6684067e12b6fa36b\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975294 kubelet[2862]: I0702 00:22:51.974922 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13af3f3d48ce1a08e90c8b035797f219-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" (UID: \"13af3f3d48ce1a08e90c8b035797f219\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975294 kubelet[2862]: I0702 00:22:51.974952 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975294 kubelet[2862]: I0702 00:22:51.974977 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.975294 kubelet[2862]: I0702 00:22:51.975006 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6" Jul 2 00:22:51.977870 systemd[1]: Created slice kubepods-burstable-pod1dc3896f0b3f357472ad4472485710a9.slice - libcontainer container kubepods-burstable-pod1dc3896f0b3f357472ad4472485710a9.slice. Jul 2 00:22:51.981977 systemd[1]: Created slice kubepods-burstable-pod2376ec5a8d2fe0b6684067e12b6fa36b.slice - libcontainer container kubepods-burstable-pod2376ec5a8d2fe0b6684067e12b6fa36b.slice. Jul 2 00:22:52.076848 kubelet[2862]: I0702 00:22:52.076724 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7b42818af6" Jul 2 00:22:52.077318 kubelet[2862]: E0702 00:22:52.077222 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.1.1-a-7b42818af6" Jul 2 00:22:52.276153 containerd[1714]: time="2024-07-02T00:22:52.276096181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-7b42818af6,Uid:13af3f3d48ce1a08e90c8b035797f219,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:52.280766 containerd[1714]: time="2024-07-02T00:22:52.280722852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-7b42818af6,Uid:1dc3896f0b3f357472ad4472485710a9,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:52.284404 containerd[1714]: time="2024-07-02T00:22:52.284328407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-7b42818af6,Uid:2376ec5a8d2fe0b6684067e12b6fa36b,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:52.374569 kubelet[2862]: E0702 00:22:52.374527 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7b42818af6?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Jul 2 00:22:52.479845 kubelet[2862]: I0702 00:22:52.479799 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7b42818af6" Jul 2 00:22:52.480273 kubelet[2862]: E0702 00:22:52.480244 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.1.1-a-7b42818af6" Jul 2 00:22:52.859628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127202116.mount: Deactivated successfully. Jul 2 00:22:52.890485 containerd[1714]: time="2024-07-02T00:22:52.890417199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:52.893252 containerd[1714]: time="2024-07-02T00:22:52.893195741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 2 00:22:52.896168 containerd[1714]: time="2024-07-02T00:22:52.896133186Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:52.899511 containerd[1714]: time="2024-07-02T00:22:52.899478237Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:52.902585 containerd[1714]: time="2024-07-02T00:22:52.902539484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:22:52.905869 containerd[1714]: time="2024-07-02T00:22:52.905829735Z" level=info 
msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:52.907838 containerd[1714]: time="2024-07-02T00:22:52.907613862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:22:52.914167 containerd[1714]: time="2024-07-02T00:22:52.914117862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:52.915423 containerd[1714]: time="2024-07-02T00:22:52.914905674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.397965ms" Jul 2 00:22:52.915954 containerd[1714]: time="2024-07-02T00:22:52.915921190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.701307ms" Jul 2 00:22:52.926232 containerd[1714]: time="2024-07-02T00:22:52.926187347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 645.368594ms" Jul 2 00:22:52.938233 
kubelet[2862]: W0702 00:22:52.938202 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:52.938323 kubelet[2862]: E0702 00:22:52.938243 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:52.963722 kubelet[2862]: W0702 00:22:52.963666 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7b42818af6&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:52.963821 kubelet[2862]: E0702 00:22:52.963733 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7b42818af6&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:53.032582 kubelet[2862]: W0702 00:22:53.032521 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:53.032582 kubelet[2862]: E0702 00:22:53.032584 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:53.176099 kubelet[2862]: E0702 00:22:53.175977 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7b42818af6?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s"
Jul 2 00:22:53.243808 kubelet[2862]: W0702 00:22:53.243696 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:53.243808 kubelet[2862]: E0702 00:22:53.243768 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:53.282549 kubelet[2862]: I0702 00:22:53.282517 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:22:53.282869 kubelet[2862]: E0702 00:22:53.282846 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:22:53.497814 kubelet[2862]: E0702 00:22:53.497681 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-a-7b42818af6.17de3d8a4407a7ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-7b42818af6,UID:ci-3975.1.1-a-7b42818af6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-7b42818af6,},FirstTimestamp:2024-07-02 00:22:51.758241742 +0000 UTC m=+0.778559937,LastTimestamp:2024-07-02 00:22:51.758241742 +0000 UTC m=+0.778559937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-7b42818af6,}"
Jul 2 00:22:53.753941 kubelet[2862]: E0702 00:22:53.753841 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused
Jul 2 00:22:53.897745 containerd[1714]: time="2024-07-02T00:22:53.897413436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:53.897745 containerd[1714]: time="2024-07-02T00:22:53.897491338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:53.897745 containerd[1714]: time="2024-07-02T00:22:53.897517638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:53.897745 containerd[1714]: time="2024-07-02T00:22:53.897534938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:53.908734 containerd[1714]: time="2024-07-02T00:22:53.903712533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:53.908734 containerd[1714]: time="2024-07-02T00:22:53.903786534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:53.908734 containerd[1714]: time="2024-07-02T00:22:53.903812734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:53.908734 containerd[1714]: time="2024-07-02T00:22:53.903830835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:53.913651 containerd[1714]: time="2024-07-02T00:22:53.908603508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:53.913651 containerd[1714]: time="2024-07-02T00:22:53.908661009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:53.913651 containerd[1714]: time="2024-07-02T00:22:53.908692609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:53.913651 containerd[1714]: time="2024-07-02T00:22:53.908712210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:53.962604 systemd[1]: Started cri-containerd-515d555e880e1e4aa031a7615a0cc25e78e7fb44954d9ea5f19e29591cca73bf.scope - libcontainer container 515d555e880e1e4aa031a7615a0cc25e78e7fb44954d9ea5f19e29591cca73bf.
Jul 2 00:22:53.964545 systemd[1]: Started cri-containerd-bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1.scope - libcontainer container bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1.
Jul 2 00:22:53.966107 systemd[1]: Started cri-containerd-d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5.scope - libcontainer container d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5.
Jul 2 00:22:54.042160 containerd[1714]: time="2024-07-02T00:22:54.040966937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-7b42818af6,Uid:13af3f3d48ce1a08e90c8b035797f219,Namespace:kube-system,Attempt:0,} returns sandbox id \"515d555e880e1e4aa031a7615a0cc25e78e7fb44954d9ea5f19e29591cca73bf\""
Jul 2 00:22:54.054634 containerd[1714]: time="2024-07-02T00:22:54.054589046Z" level=info msg="CreateContainer within sandbox \"515d555e880e1e4aa031a7615a0cc25e78e7fb44954d9ea5f19e29591cca73bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 00:22:54.055612 containerd[1714]: time="2024-07-02T00:22:54.055579161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-7b42818af6,Uid:1dc3896f0b3f357472ad4472485710a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5\""
Jul 2 00:22:54.055946 containerd[1714]: time="2024-07-02T00:22:54.055915966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-7b42818af6,Uid:2376ec5a8d2fe0b6684067e12b6fa36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1\""
Jul 2 00:22:54.059318 containerd[1714]: time="2024-07-02T00:22:54.059135316Z" level=info msg="CreateContainer within sandbox \"bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 00:22:54.059318 containerd[1714]: time="2024-07-02T00:22:54.059185916Z" level=info msg="CreateContainer within sandbox \"d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 00:22:54.143769 containerd[1714]: time="2024-07-02T00:22:54.143711512Z" level=info msg="CreateContainer within sandbox \"bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74\""
Jul 2 00:22:54.145350 containerd[1714]: time="2024-07-02T00:22:54.145229636Z" level=info msg="StartContainer for \"61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74\""
Jul 2 00:22:54.148454 containerd[1714]: time="2024-07-02T00:22:54.148331483Z" level=info msg="CreateContainer within sandbox \"515d555e880e1e4aa031a7615a0cc25e78e7fb44954d9ea5f19e29591cca73bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"481efb33a1ea3c2595be470bbfc4f52269ee7dc674c2dadceda510e1dcb642da\""
Jul 2 00:22:54.149420 containerd[1714]: time="2024-07-02T00:22:54.149078995Z" level=info msg="StartContainer for \"481efb33a1ea3c2595be470bbfc4f52269ee7dc674c2dadceda510e1dcb642da\""
Jul 2 00:22:54.152643 containerd[1714]: time="2024-07-02T00:22:54.152603549Z" level=info msg="CreateContainer within sandbox \"d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e\""
Jul 2 00:22:54.153500 containerd[1714]: time="2024-07-02T00:22:54.153475362Z" level=info msg="StartContainer for \"fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e\""
Jul 2 00:22:54.185790 systemd[1]: Started cri-containerd-61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74.scope - libcontainer container 61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74.
Jul 2 00:22:54.206623 systemd[1]: Started cri-containerd-fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e.scope - libcontainer container fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e.
Jul 2 00:22:54.216636 systemd[1]: Started cri-containerd-481efb33a1ea3c2595be470bbfc4f52269ee7dc674c2dadceda510e1dcb642da.scope - libcontainer container 481efb33a1ea3c2595be470bbfc4f52269ee7dc674c2dadceda510e1dcb642da.
Jul 2 00:22:54.287705 containerd[1714]: time="2024-07-02T00:22:54.287657719Z" level=info msg="StartContainer for \"61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74\" returns successfully"
Jul 2 00:22:54.300368 containerd[1714]: time="2024-07-02T00:22:54.300189611Z" level=info msg="StartContainer for \"fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e\" returns successfully"
Jul 2 00:22:54.309771 containerd[1714]: time="2024-07-02T00:22:54.309650856Z" level=info msg="StartContainer for \"481efb33a1ea3c2595be470bbfc4f52269ee7dc674c2dadceda510e1dcb642da\" returns successfully"
Jul 2 00:22:54.891780 kubelet[2862]: I0702 00:22:54.891307 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:22:56.340137 kubelet[2862]: E0702 00:22:56.340094 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-a-7b42818af6\" not found" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:22:56.469801 kubelet[2862]: I0702 00:22:56.469639 2862 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:22:56.755420 kubelet[2862]: I0702 00:22:56.755364 2862 apiserver.go:52] "Watching apiserver"
Jul 2 00:22:56.774472 kubelet[2862]: I0702 00:22:56.774416 2862 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:22:56.896461 kubelet[2862]: E0702 00:22:56.896410 2862 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6"
Jul 2 00:22:58.839138 systemd[1]: Reloading requested from client PID 3141 ('systemctl') (unit session-9.scope)...
Jul 2 00:22:58.839539 systemd[1]: Reloading...
Jul 2 00:22:58.944495 zram_generator::config[3181]: No configuration found.
Jul 2 00:22:59.062958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:22:59.157621 systemd[1]: Reloading finished in 317 ms.
Jul 2 00:22:59.204219 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:59.220988 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:22:59.221247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:59.226871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:59.923646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:59.930205 (kubelet)[3245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:22:59.993973 kubelet[3245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:22:59.993973 kubelet[3245]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:22:59.993973 kubelet[3245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:22:59.993973 kubelet[3245]: I0702 00:22:59.993754 3245 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:22:59.998303 kubelet[3245]: I0702 00:22:59.998273 3245 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:22:59.998303 kubelet[3245]: I0702 00:22:59.998296 3245 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:22:59.998541 kubelet[3245]: I0702 00:22:59.998520 3245 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:22:59.999865 kubelet[3245]: I0702 00:22:59.999840 3245 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 00:23:00.316279 kubelet[3245]: I0702 00:23:00.316136 3245 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:23:00.325799 kubelet[3245]: I0702 00:23:00.325769 3245 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:23:00.326034 kubelet[3245]: I0702 00:23:00.326013 3245 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:23:00.326216 kubelet[3245]: I0702 00:23:00.326195 3245 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:23:00.326364 kubelet[3245]: I0702 00:23:00.326226 3245 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:23:00.326364 kubelet[3245]: I0702 00:23:00.326239 3245 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:23:00.326364 kubelet[3245]: I0702 00:23:00.326292 3245 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:00.326509 kubelet[3245]: I0702 00:23:00.326413 3245 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:23:00.326967 kubelet[3245]: I0702 00:23:00.326843 3245 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:23:00.326967 kubelet[3245]: I0702 00:23:00.326886 3245 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:23:00.326967 kubelet[3245]: I0702 00:23:00.326904 3245 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:23:00.331463 kubelet[3245]: I0702 00:23:00.327981 3245 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:23:00.331463 kubelet[3245]: I0702 00:23:00.328177 3245 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:23:00.331463 kubelet[3245]: I0702 00:23:00.328690 3245 server.go:1256] "Started kubelet"
Jul 2 00:23:00.333764 kubelet[3245]: I0702 00:23:00.333741 3245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:23:00.342006 kubelet[3245]: I0702 00:23:00.341415 3245 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:23:00.342334 kubelet[3245]: I0702 00:23:00.342315 3245 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:23:00.345640 kubelet[3245]: I0702 00:23:00.343591 3245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:23:00.345640 kubelet[3245]: I0702 00:23:00.343760 3245 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:23:00.345640 kubelet[3245]: I0702 00:23:00.345511 3245 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:23:00.346923 kubelet[3245]: I0702 00:23:00.346908 3245 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:23:00.347095 kubelet[3245]: I0702 00:23:00.347081 3245 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:23:00.350483 kubelet[3245]: I0702 00:23:00.348453 3245 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:23:00.350683 kubelet[3245]: I0702 00:23:00.350665 3245 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:23:00.357163 kubelet[3245]: I0702 00:23:00.356006 3245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:23:00.358386 kubelet[3245]: I0702 00:23:00.358366 3245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:23:00.358482 kubelet[3245]: I0702 00:23:00.358397 3245 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:23:00.358482 kubelet[3245]: I0702 00:23:00.358415 3245 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:23:00.358616 kubelet[3245]: E0702 00:23:00.358603 3245 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:23:00.363760 kubelet[3245]: E0702 00:23:00.363731 3245 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:23:00.366586 kubelet[3245]: I0702 00:23:00.366563 3245 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:23:00.405110 kubelet[3245]: I0702 00:23:00.405088 3245 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:23:00.405110 kubelet[3245]: I0702 00:23:00.405111 3245 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:23:00.405267 kubelet[3245]: I0702 00:23:00.405129 3245 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:00.405313 kubelet[3245]: I0702 00:23:00.405281 3245 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:23:00.405313 kubelet[3245]: I0702 00:23:00.405306 3245 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:23:00.405552 kubelet[3245]: I0702 00:23:00.405315 3245 policy_none.go:49] "None policy: Start"
Jul 2 00:23:00.405905 kubelet[3245]: I0702 00:23:00.405844 3245 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:23:00.405905 kubelet[3245]: I0702 00:23:00.405871 3245 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:23:00.406064 kubelet[3245]: I0702 00:23:00.406043 3245 state_mem.go:75] "Updated machine memory state"
Jul 2 00:23:00.410006 kubelet[3245]: I0702 00:23:00.409987 3245 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:23:00.410503 kubelet[3245]: I0702 00:23:00.410226 3245 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:23:00.450391 kubelet[3245]: I0702 00:23:00.449477 3245 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.459991 kubelet[3245]: I0702 00:23:00.459237 3245 topology_manager.go:215] "Topology Admit Handler" podUID="13af3f3d48ce1a08e90c8b035797f219" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.463453 kubelet[3245]: I0702 00:23:00.460178 3245 topology_manager.go:215] "Topology Admit Handler" podUID="1dc3896f0b3f357472ad4472485710a9" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.463453 kubelet[3245]: I0702 00:23:00.460278 3245 topology_manager.go:215] "Topology Admit Handler" podUID="2376ec5a8d2fe0b6684067e12b6fa36b" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.464197 kubelet[3245]: I0702 00:23:00.464172 3245 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.464381 kubelet[3245]: I0702 00:23:00.464364 3245 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.469945 kubelet[3245]: W0702 00:23:00.469926 3245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:23:00.473459 kubelet[3245]: W0702 00:23:00.472463 3245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:23:00.473723 kubelet[3245]: W0702 00:23:00.473690 3245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:23:00.647752 kubelet[3245]: I0702 00:23:00.647704 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2376ec5a8d2fe0b6684067e12b6fa36b-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-7b42818af6\" (UID: \"2376ec5a8d2fe0b6684067e12b6fa36b\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648067 kubelet[3245]: I0702 00:23:00.647780 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13af3f3d48ce1a08e90c8b035797f219-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" (UID: \"13af3f3d48ce1a08e90c8b035797f219\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648067 kubelet[3245]: I0702 00:23:00.647849 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13af3f3d48ce1a08e90c8b035797f219-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" (UID: \"13af3f3d48ce1a08e90c8b035797f219\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648067 kubelet[3245]: I0702 00:23:00.647902 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648067 kubelet[3245]: I0702 00:23:00.647950 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13af3f3d48ce1a08e90c8b035797f219-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7b42818af6\" (UID: \"13af3f3d48ce1a08e90c8b035797f219\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648067 kubelet[3245]: I0702 00:23:00.647991 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648368 kubelet[3245]: I0702 00:23:00.648030 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648368 kubelet[3245]: I0702 00:23:00.648099 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:00.648368 kubelet[3245]: I0702 00:23:00.648140 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dc3896f0b3f357472ad4472485710a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-7b42818af6\" (UID: \"1dc3896f0b3f357472ad4472485710a9\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6"
Jul 2 00:23:01.328032 kubelet[3245]: I0702 00:23:01.327998 3245 apiserver.go:52] "Watching apiserver"
Jul 2 00:23:01.347974 kubelet[3245]: I0702 00:23:01.347938 3245 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:23:01.415678 kubelet[3245]: I0702 00:23:01.415644 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-a-7b42818af6" podStartSLOduration=1.4155966150000001 podStartE2EDuration="1.415596615s" podCreationTimestamp="2024-07-02 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:23:01.415341311 +0000 UTC m=+1.479961701" watchObservedRunningTime="2024-07-02 00:23:01.415596615 +0000 UTC m=+1.480217005"
Jul 2 00:23:01.429308 kubelet[3245]: I0702 00:23:01.429244 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6" podStartSLOduration=1.429198021 podStartE2EDuration="1.429198021s" podCreationTimestamp="2024-07-02 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:23:01.429017818 +0000 UTC m=+1.493638108" watchObservedRunningTime="2024-07-02 00:23:01.429198021 +0000 UTC m=+1.493818311"
Jul 2 00:23:01.460007 kubelet[3245]: I0702 00:23:01.459498 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7b42818af6" podStartSLOduration=1.459385978 podStartE2EDuration="1.459385978s" podCreationTimestamp="2024-07-02 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:23:01.459096073 +0000 UTC m=+1.523716463" watchObservedRunningTime="2024-07-02 00:23:01.459385978 +0000 UTC m=+1.524006368"
Jul 2 00:23:07.135607 sudo[2335]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:07.239266 sshd[2332]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:07.242906 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:49702.service: Deactivated successfully.
Jul 2 00:23:07.245405 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:23:07.245782 systemd[1]: session-9.scope: Consumed 4.350s CPU time, 141.1M memory peak, 0B memory swap peak.
Jul 2 00:23:07.247409 systemd-logind[1692]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:23:07.248749 systemd-logind[1692]: Removed session 9. Jul 2 00:23:12.773203 kubelet[3245]: I0702 00:23:12.773143 3245 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:23:12.773990 kubelet[3245]: I0702 00:23:12.773817 3245 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:23:12.774070 containerd[1714]: time="2024-07-02T00:23:12.773612564Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:23:13.547313 kubelet[3245]: I0702 00:23:13.545812 3245 topology_manager.go:215] "Topology Admit Handler" podUID="39c7895a-3acd-475b-abf1-b4bec4df585b" podNamespace="kube-system" podName="kube-proxy-m7bvn" Jul 2 00:23:13.558784 systemd[1]: Created slice kubepods-besteffort-pod39c7895a_3acd_475b_abf1_b4bec4df585b.slice - libcontainer container kubepods-besteffort-pod39c7895a_3acd_475b_abf1_b4bec4df585b.slice. Jul 2 00:23:13.636021 kubelet[3245]: I0702 00:23:13.635976 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39c7895a-3acd-475b-abf1-b4bec4df585b-kube-proxy\") pod \"kube-proxy-m7bvn\" (UID: \"39c7895a-3acd-475b-abf1-b4bec4df585b\") " pod="kube-system/kube-proxy-m7bvn" Jul 2 00:23:13.636178 kubelet[3245]: I0702 00:23:13.636039 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxrgn\" (UniqueName: \"kubernetes.io/projected/39c7895a-3acd-475b-abf1-b4bec4df585b-kube-api-access-zxrgn\") pod \"kube-proxy-m7bvn\" (UID: \"39c7895a-3acd-475b-abf1-b4bec4df585b\") " pod="kube-system/kube-proxy-m7bvn" Jul 2 00:23:13.636178 kubelet[3245]: I0702 00:23:13.636070 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/39c7895a-3acd-475b-abf1-b4bec4df585b-xtables-lock\") pod \"kube-proxy-m7bvn\" (UID: \"39c7895a-3acd-475b-abf1-b4bec4df585b\") " pod="kube-system/kube-proxy-m7bvn" Jul 2 00:23:13.636178 kubelet[3245]: I0702 00:23:13.636096 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39c7895a-3acd-475b-abf1-b4bec4df585b-lib-modules\") pod \"kube-proxy-m7bvn\" (UID: \"39c7895a-3acd-475b-abf1-b4bec4df585b\") " pod="kube-system/kube-proxy-m7bvn" Jul 2 00:23:13.868470 containerd[1714]: time="2024-07-02T00:23:13.866184977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m7bvn,Uid:39c7895a-3acd-475b-abf1-b4bec4df585b,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:13.880682 kubelet[3245]: I0702 00:23:13.880609 3245 topology_manager.go:215] "Topology Admit Handler" podUID="83c9a88f-213e-4153-ae3b-3e5c8407a5ef" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-v68zg" Jul 2 00:23:13.894480 systemd[1]: Created slice kubepods-besteffort-pod83c9a88f_213e_4153_ae3b_3e5c8407a5ef.slice - libcontainer container kubepods-besteffort-pod83c9a88f_213e_4153_ae3b_3e5c8407a5ef.slice. 
Jul 2 00:23:13.937628 kubelet[3245]: I0702 00:23:13.937584 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83c9a88f-213e-4153-ae3b-3e5c8407a5ef-var-lib-calico\") pod \"tigera-operator-76c4974c85-v68zg\" (UID: \"83c9a88f-213e-4153-ae3b-3e5c8407a5ef\") " pod="tigera-operator/tigera-operator-76c4974c85-v68zg" Jul 2 00:23:13.937869 kubelet[3245]: I0702 00:23:13.937847 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72849\" (UniqueName: \"kubernetes.io/projected/83c9a88f-213e-4153-ae3b-3e5c8407a5ef-kube-api-access-72849\") pod \"tigera-operator-76c4974c85-v68zg\" (UID: \"83c9a88f-213e-4153-ae3b-3e5c8407a5ef\") " pod="tigera-operator/tigera-operator-76c4974c85-v68zg" Jul 2 00:23:13.967707 containerd[1714]: time="2024-07-02T00:23:13.967597936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:13.967707 containerd[1714]: time="2024-07-02T00:23:13.967648937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:13.967707 containerd[1714]: time="2024-07-02T00:23:13.967666537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:13.967707 containerd[1714]: time="2024-07-02T00:23:13.967681137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:13.996593 systemd[1]: Started cri-containerd-1c9eea0b0bf7bcc8c2377fb3d5b0080e354050dad6c61c9f6b3f8c9e83aa63c0.scope - libcontainer container 1c9eea0b0bf7bcc8c2377fb3d5b0080e354050dad6c61c9f6b3f8c9e83aa63c0. 
Jul 2 00:23:14.019195 containerd[1714]: time="2024-07-02T00:23:14.019154077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m7bvn,Uid:39c7895a-3acd-475b-abf1-b4bec4df585b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c9eea0b0bf7bcc8c2377fb3d5b0080e354050dad6c61c9f6b3f8c9e83aa63c0\""
Jul 2 00:23:14.022155 containerd[1714]: time="2024-07-02T00:23:14.022104020Z" level=info msg="CreateContainer within sandbox \"1c9eea0b0bf7bcc8c2377fb3d5b0080e354050dad6c61c9f6b3f8c9e83aa63c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:23:14.067692 containerd[1714]: time="2024-07-02T00:23:14.067641275Z" level=info msg="CreateContainer within sandbox \"1c9eea0b0bf7bcc8c2377fb3d5b0080e354050dad6c61c9f6b3f8c9e83aa63c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f76bd6f3f07abeddd19f9fdb8bff7ef733a139ea013b6b27ce3f2079e03fee0\""
Jul 2 00:23:14.068366 containerd[1714]: time="2024-07-02T00:23:14.068279584Z" level=info msg="StartContainer for \"2f76bd6f3f07abeddd19f9fdb8bff7ef733a139ea013b6b27ce3f2079e03fee0\""
Jul 2 00:23:14.096591 systemd[1]: Started cri-containerd-2f76bd6f3f07abeddd19f9fdb8bff7ef733a139ea013b6b27ce3f2079e03fee0.scope - libcontainer container 2f76bd6f3f07abeddd19f9fdb8bff7ef733a139ea013b6b27ce3f2079e03fee0.
Jul 2 00:23:14.129518 containerd[1714]: time="2024-07-02T00:23:14.129399263Z" level=info msg="StartContainer for \"2f76bd6f3f07abeddd19f9fdb8bff7ef733a139ea013b6b27ce3f2079e03fee0\" returns successfully"
Jul 2 00:23:14.198015 containerd[1714]: time="2024-07-02T00:23:14.197979749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-v68zg,Uid:83c9a88f-213e-4153-ae3b-3e5c8407a5ef,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:23:14.264886 containerd[1714]: time="2024-07-02T00:23:14.264786710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:23:14.264886 containerd[1714]: time="2024-07-02T00:23:14.264888112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:23:14.265094 containerd[1714]: time="2024-07-02T00:23:14.264920512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:23:14.265094 containerd[1714]: time="2024-07-02T00:23:14.264962413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:23:14.281621 systemd[1]: Started cri-containerd-2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917.scope - libcontainer container 2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917.
Jul 2 00:23:14.337703 containerd[1714]: time="2024-07-02T00:23:14.337649358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-v68zg,Uid:83c9a88f-213e-4153-ae3b-3e5c8407a5ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917\""
Jul 2 00:23:14.339551 containerd[1714]: time="2024-07-02T00:23:14.339523085Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:23:14.433416 kubelet[3245]: I0702 00:23:14.433302 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m7bvn" podStartSLOduration=1.433262033 podStartE2EDuration="1.433262033s" podCreationTimestamp="2024-07-02 00:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:23:14.43306233 +0000 UTC m=+14.497682720" watchObservedRunningTime="2024-07-02 00:23:14.433262033 +0000 UTC m=+14.497882423"
Jul 2 00:23:16.499273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862056175.mount: Deactivated successfully.
Jul 2 00:23:17.070632 containerd[1714]: time="2024-07-02T00:23:17.070574828Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:17.073481 containerd[1714]: time="2024-07-02T00:23:17.073402163Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076048"
Jul 2 00:23:17.077837 containerd[1714]: time="2024-07-02T00:23:17.077777017Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:17.082313 containerd[1714]: time="2024-07-02T00:23:17.082253272Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:17.083570 containerd[1714]: time="2024-07-02T00:23:17.082998481Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.743438796s"
Jul 2 00:23:17.083570 containerd[1714]: time="2024-07-02T00:23:17.083038882Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:23:17.084960 containerd[1714]: time="2024-07-02T00:23:17.084935105Z" level=info msg="CreateContainer within sandbox \"2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:23:17.123258 containerd[1714]: time="2024-07-02T00:23:17.123216677Z" level=info msg="CreateContainer within sandbox \"2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef\""
Jul 2 00:23:17.123795 containerd[1714]: time="2024-07-02T00:23:17.123614682Z" level=info msg="StartContainer for \"50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef\""
Jul 2 00:23:17.154596 systemd[1]: Started cri-containerd-50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef.scope - libcontainer container 50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef.
Jul 2 00:23:17.178871 containerd[1714]: time="2024-07-02T00:23:17.178829863Z" level=info msg="StartContainer for \"50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef\" returns successfully"
Jul 2 00:23:20.356472 kubelet[3245]: I0702 00:23:20.353632 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-v68zg" podStartSLOduration=4.609326425 podStartE2EDuration="7.353581031s" podCreationTimestamp="2024-07-02 00:23:13 +0000 UTC" firstStartedPulling="2024-07-02 00:23:14.339079679 +0000 UTC m=+14.403699969" lastFinishedPulling="2024-07-02 00:23:17.083334285 +0000 UTC m=+17.147954575" observedRunningTime="2024-07-02 00:23:17.439284677 +0000 UTC m=+17.503905067" watchObservedRunningTime="2024-07-02 00:23:20.353581031 +0000 UTC m=+20.418201421"
Jul 2 00:23:20.356472 kubelet[3245]: I0702 00:23:20.353777 3245 topology_manager.go:215] "Topology Admit Handler" podUID="ea8e4fad-5503-4870-9674-b12d7e93ef34" podNamespace="calico-system" podName="calico-typha-b844fcdb7-6g2s9"
Jul 2 00:23:20.368668 systemd[1]: Created slice kubepods-besteffort-podea8e4fad_5503_4870_9674_b12d7e93ef34.slice - libcontainer container kubepods-besteffort-podea8e4fad_5503_4870_9674_b12d7e93ef34.slice.
Jul 2 00:23:20.382885 kubelet[3245]: I0702 00:23:20.382843 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea8e4fad-5503-4870-9674-b12d7e93ef34-tigera-ca-bundle\") pod \"calico-typha-b844fcdb7-6g2s9\" (UID: \"ea8e4fad-5503-4870-9674-b12d7e93ef34\") " pod="calico-system/calico-typha-b844fcdb7-6g2s9"
Jul 2 00:23:20.383016 kubelet[3245]: I0702 00:23:20.382951 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6msp\" (UniqueName: \"kubernetes.io/projected/ea8e4fad-5503-4870-9674-b12d7e93ef34-kube-api-access-s6msp\") pod \"calico-typha-b844fcdb7-6g2s9\" (UID: \"ea8e4fad-5503-4870-9674-b12d7e93ef34\") " pod="calico-system/calico-typha-b844fcdb7-6g2s9"
Jul 2 00:23:20.383071 kubelet[3245]: I0702 00:23:20.383021 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ea8e4fad-5503-4870-9674-b12d7e93ef34-typha-certs\") pod \"calico-typha-b844fcdb7-6g2s9\" (UID: \"ea8e4fad-5503-4870-9674-b12d7e93ef34\") " pod="calico-system/calico-typha-b844fcdb7-6g2s9"
Jul 2 00:23:20.448457 kubelet[3245]: I0702 00:23:20.446023 3245 topology_manager.go:215] "Topology Admit Handler" podUID="b9521cb9-ba88-416e-a7b7-6deacf74b146" podNamespace="calico-system" podName="calico-node-mkxcg"
Jul 2 00:23:20.458270 systemd[1]: Created slice kubepods-besteffort-podb9521cb9_ba88_416e_a7b7_6deacf74b146.slice - libcontainer container kubepods-besteffort-podb9521cb9_ba88_416e_a7b7_6deacf74b146.slice.
Jul 2 00:23:20.483193 kubelet[3245]: I0702 00:23:20.483159 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs4ks\" (UniqueName: \"kubernetes.io/projected/b9521cb9-ba88-416e-a7b7-6deacf74b146-kube-api-access-rs4ks\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.483694 kubelet[3245]: I0702 00:23:20.483342 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-xtables-lock\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.483694 kubelet[3245]: I0702 00:23:20.483481 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-policysync\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.483694 kubelet[3245]: I0702 00:23:20.483532 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-cni-log-dir\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.483694 kubelet[3245]: I0702 00:23:20.483623 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9521cb9-ba88-416e-a7b7-6deacf74b146-tigera-ca-bundle\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.484462 kubelet[3245]: I0702 00:23:20.484094 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-cni-bin-dir\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.484462 kubelet[3245]: I0702 00:23:20.484156 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-lib-modules\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.484462 kubelet[3245]: I0702 00:23:20.484189 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-var-lib-calico\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.484462 kubelet[3245]: I0702 00:23:20.484227 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-flexvol-driver-host\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.484462 kubelet[3245]: I0702 00:23:20.484272 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b9521cb9-ba88-416e-a7b7-6deacf74b146-node-certs\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.484837 kubelet[3245]: I0702 00:23:20.484803 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-cni-net-dir\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.487741 kubelet[3245]: I0702 00:23:20.487719 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b9521cb9-ba88-416e-a7b7-6deacf74b146-var-run-calico\") pod \"calico-node-mkxcg\" (UID: \"b9521cb9-ba88-416e-a7b7-6deacf74b146\") " pod="calico-system/calico-node-mkxcg"
Jul 2 00:23:20.567140 kubelet[3245]: I0702 00:23:20.567094 3245 topology_manager.go:215] "Topology Admit Handler" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" podNamespace="calico-system" podName="csi-node-driver-46k9p"
Jul 2 00:23:20.567647 kubelet[3245]: E0702 00:23:20.567623 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7"
Jul 2 00:23:20.590278 kubelet[3245]: I0702 00:23:20.587962 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d83f546f-f0c6-4f7a-a190-1895a85550b7-kubelet-dir\") pod \"csi-node-driver-46k9p\" (UID: \"d83f546f-f0c6-4f7a-a190-1895a85550b7\") " pod="calico-system/csi-node-driver-46k9p"
Jul 2 00:23:20.590278 kubelet[3245]: I0702 00:23:20.588039 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d83f546f-f0c6-4f7a-a190-1895a85550b7-socket-dir\") pod \"csi-node-driver-46k9p\" (UID: \"d83f546f-f0c6-4f7a-a190-1895a85550b7\") " pod="calico-system/csi-node-driver-46k9p"
Jul 2 00:23:20.590278 kubelet[3245]: I0702 00:23:20.588075 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9td\" (UniqueName: \"kubernetes.io/projected/d83f546f-f0c6-4f7a-a190-1895a85550b7-kube-api-access-wg9td\") pod \"csi-node-driver-46k9p\" (UID: \"d83f546f-f0c6-4f7a-a190-1895a85550b7\") " pod="calico-system/csi-node-driver-46k9p"
Jul 2 00:23:20.590278 kubelet[3245]: I0702 00:23:20.588142 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d83f546f-f0c6-4f7a-a190-1895a85550b7-varrun\") pod \"csi-node-driver-46k9p\" (UID: \"d83f546f-f0c6-4f7a-a190-1895a85550b7\") " pod="calico-system/csi-node-driver-46k9p"
Jul 2 00:23:20.590278 kubelet[3245]: I0702 00:23:20.588168 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d83f546f-f0c6-4f7a-a190-1895a85550b7-registration-dir\") pod \"csi-node-driver-46k9p\" (UID: \"d83f546f-f0c6-4f7a-a190-1895a85550b7\") " pod="calico-system/csi-node-driver-46k9p"
Jul 2 00:23:20.591459 kubelet[3245]: E0702 00:23:20.591414 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.592133 kubelet[3245]: W0702 00:23:20.592095 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.592305 kubelet[3245]: E0702 00:23:20.592289 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.592625 kubelet[3245]: E0702 00:23:20.592605 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.592625 kubelet[3245]: W0702 00:23:20.592624 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.592748 kubelet[3245]: E0702 00:23:20.592645 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.593081 kubelet[3245]: E0702 00:23:20.593061 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.593081 kubelet[3245]: W0702 00:23:20.593079 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.593201 kubelet[3245]: E0702 00:23:20.593167 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.594684 kubelet[3245]: E0702 00:23:20.594664 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.594684 kubelet[3245]: W0702 00:23:20.594683 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.594938 kubelet[3245]: E0702 00:23:20.594771 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.594998 kubelet[3245]: E0702 00:23:20.594961 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.594998 kubelet[3245]: W0702 00:23:20.594974 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.595096 kubelet[3245]: E0702 00:23:20.595062 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.595252 kubelet[3245]: E0702 00:23:20.595234 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.595252 kubelet[3245]: W0702 00:23:20.595247 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.595369 kubelet[3245]: E0702 00:23:20.595333 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.595519 kubelet[3245]: E0702 00:23:20.595504 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.595591 kubelet[3245]: W0702 00:23:20.595520 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.595638 kubelet[3245]: E0702 00:23:20.595606 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.596471 kubelet[3245]: E0702 00:23:20.595769 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.596471 kubelet[3245]: W0702 00:23:20.595781 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.596471 kubelet[3245]: E0702 00:23:20.595866 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.596471 kubelet[3245]: E0702 00:23:20.596000 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.596471 kubelet[3245]: W0702 00:23:20.596008 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.596471 kubelet[3245]: E0702 00:23:20.596093 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.596471 kubelet[3245]: E0702 00:23:20.596241 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.596471 kubelet[3245]: W0702 00:23:20.596250 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.596471 kubelet[3245]: E0702 00:23:20.596346 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.597015 kubelet[3245]: E0702 00:23:20.596493 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.597015 kubelet[3245]: W0702 00:23:20.596502 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.597015 kubelet[3245]: E0702 00:23:20.596587 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.597015 kubelet[3245]: E0702 00:23:20.596761 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.597015 kubelet[3245]: W0702 00:23:20.596771 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.597015 kubelet[3245]: E0702 00:23:20.596806 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.597331 kubelet[3245]: E0702 00:23:20.597037 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.597331 kubelet[3245]: W0702 00:23:20.597047 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.597331 kubelet[3245]: E0702 00:23:20.597076 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.597331 kubelet[3245]: E0702 00:23:20.597323 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.597331 kubelet[3245]: W0702 00:23:20.597333 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.598153 kubelet[3245]: E0702 00:23:20.597349 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.598153 kubelet[3245]: E0702 00:23:20.597785 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.598153 kubelet[3245]: W0702 00:23:20.597797 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.598153 kubelet[3245]: E0702 00:23:20.597813 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.598612 kubelet[3245]: E0702 00:23:20.598588 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.598612 kubelet[3245]: W0702 00:23:20.598607 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.598747 kubelet[3245]: E0702 00:23:20.598624 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.600528 kubelet[3245]: E0702 00:23:20.600507 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.600528 kubelet[3245]: W0702 00:23:20.600527 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.600658 kubelet[3245]: E0702 00:23:20.600543 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.606471 kubelet[3245]: E0702 00:23:20.601935 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.606471 kubelet[3245]: W0702 00:23:20.601952 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.606471 kubelet[3245]: E0702 00:23:20.601968 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.621140 kubelet[3245]: E0702 00:23:20.621049 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.621140 kubelet[3245]: W0702 00:23:20.621075 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.621140 kubelet[3245]: E0702 00:23:20.621097 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.673735 containerd[1714]: time="2024-07-02T00:23:20.673667480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b844fcdb7-6g2s9,Uid:ea8e4fad-5503-4870-9674-b12d7e93ef34,Namespace:calico-system,Attempt:0,}"
Jul 2 00:23:20.689211 kubelet[3245]: E0702 00:23:20.689175 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.689211 kubelet[3245]: W0702 00:23:20.689202 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.689397 kubelet[3245]: E0702 00:23:20.689227 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.690836 kubelet[3245]: E0702 00:23:20.690801 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.690836 kubelet[3245]: W0702 00:23:20.690820 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.690992 kubelet[3245]: E0702 00:23:20.690853 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.691170 kubelet[3245]: E0702 00:23:20.691152 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.691170 kubelet[3245]: W0702 00:23:20.691170 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.691262 kubelet[3245]: E0702 00:23:20.691201 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.691505 kubelet[3245]: E0702 00:23:20.691487 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.691505 kubelet[3245]: W0702 00:23:20.691503 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.691651 kubelet[3245]: E0702 00:23:20.691531 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.692634 kubelet[3245]: E0702 00:23:20.692611 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.692634 kubelet[3245]: W0702 00:23:20.692631 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.692760 kubelet[3245]: E0702 00:23:20.692722 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.693102 kubelet[3245]: E0702 00:23:20.693080 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.693102 kubelet[3245]: W0702 00:23:20.693099 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.693219 kubelet[3245]: E0702 00:23:20.693190 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.694534 kubelet[3245]: E0702 00:23:20.694512 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.694534 kubelet[3245]: W0702 00:23:20.694531 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.694620 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.694796 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:23:20.696045 kubelet[3245]: W0702 00:23:20.694806 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.694890 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.695047 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.696045 kubelet[3245]: W0702 00:23:20.695056 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.695141 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.695297 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.696045 kubelet[3245]: W0702 00:23:20.695306 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.696045 kubelet[3245]: E0702 00:23:20.695393 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.696395 kubelet[3245]: E0702 00:23:20.695571 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.696395 kubelet[3245]: W0702 00:23:20.695581 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.696395 kubelet[3245]: E0702 00:23:20.695665 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.697466 kubelet[3245]: E0702 00:23:20.696561 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.697466 kubelet[3245]: W0702 00:23:20.696576 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.697466 kubelet[3245]: E0702 00:23:20.696603 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.697466 kubelet[3245]: E0702 00:23:20.696817 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.697466 kubelet[3245]: W0702 00:23:20.696828 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.697466 kubelet[3245]: E0702 00:23:20.696915 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.697466 kubelet[3245]: E0702 00:23:20.697091 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.697466 kubelet[3245]: W0702 00:23:20.697100 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.697815 kubelet[3245]: E0702 00:23:20.697665 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.697894 kubelet[3245]: E0702 00:23:20.697877 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.697946 kubelet[3245]: W0702 00:23:20.697896 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.698002 kubelet[3245]: E0702 00:23:20.697989 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.698488 kubelet[3245]: E0702 00:23:20.698469 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.698488 kubelet[3245]: W0702 00:23:20.698486 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.698611 kubelet[3245]: E0702 00:23:20.698573 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.698751 kubelet[3245]: E0702 00:23:20.698734 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.698751 kubelet[3245]: W0702 00:23:20.698750 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.699303 kubelet[3245]: E0702 00:23:20.699280 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.701093 kubelet[3245]: E0702 00:23:20.701071 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.701093 kubelet[3245]: W0702 00:23:20.701091 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.701224 kubelet[3245]: E0702 00:23:20.701179 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.701408 kubelet[3245]: E0702 00:23:20.701396 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.702516 kubelet[3245]: W0702 00:23:20.701410 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.702516 kubelet[3245]: E0702 00:23:20.701457 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.702516 kubelet[3245]: E0702 00:23:20.701747 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.702516 kubelet[3245]: W0702 00:23:20.701758 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.702516 kubelet[3245]: E0702 00:23:20.701846 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.702516 kubelet[3245]: E0702 00:23:20.702008 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.702516 kubelet[3245]: W0702 00:23:20.702017 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.702516 kubelet[3245]: E0702 00:23:20.702104 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.702516 kubelet[3245]: E0702 00:23:20.702268 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.702516 kubelet[3245]: W0702 00:23:20.702277 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.702867 kubelet[3245]: E0702 00:23:20.702393 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.703734 kubelet[3245]: E0702 00:23:20.703715 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.703734 kubelet[3245]: W0702 00:23:20.703732 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.703862 kubelet[3245]: E0702 00:23:20.703765 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.704014 kubelet[3245]: E0702 00:23:20.703988 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.704014 kubelet[3245]: W0702 00:23:20.704004 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.704101 kubelet[3245]: E0702 00:23:20.704031 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:20.704294 kubelet[3245]: E0702 00:23:20.704278 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.704294 kubelet[3245]: W0702 00:23:20.704294 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.704408 kubelet[3245]: E0702 00:23:20.704310 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.743724 containerd[1714]: time="2024-07-02T00:23:20.740573805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:20.743724 containerd[1714]: time="2024-07-02T00:23:20.743567642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:20.743724 containerd[1714]: time="2024-07-02T00:23:20.743593242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:20.743724 containerd[1714]: time="2024-07-02T00:23:20.743613443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:20.745485 kubelet[3245]: E0702 00:23:20.744348 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:20.745485 kubelet[3245]: W0702 00:23:20.744372 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:20.745485 kubelet[3245]: E0702 00:23:20.744396 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:20.765161 containerd[1714]: time="2024-07-02T00:23:20.764668002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mkxcg,Uid:b9521cb9-ba88-416e-a7b7-6deacf74b146,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:20.782641 systemd[1]: Started cri-containerd-c64803a3b07266a712d8b8e4268f58541f52d15b424cf48ff12cf4ffdf3e513d.scope - libcontainer container c64803a3b07266a712d8b8e4268f58541f52d15b424cf48ff12cf4ffdf3e513d. Jul 2 00:23:20.853839 containerd[1714]: time="2024-07-02T00:23:20.853779102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b844fcdb7-6g2s9,Uid:ea8e4fad-5503-4870-9674-b12d7e93ef34,Namespace:calico-system,Attempt:0,} returns sandbox id \"c64803a3b07266a712d8b8e4268f58541f52d15b424cf48ff12cf4ffdf3e513d\"" Jul 2 00:23:20.856586 containerd[1714]: time="2024-07-02T00:23:20.856521536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:23:21.323198 containerd[1714]: time="2024-07-02T00:23:21.322855311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:21.323198 containerd[1714]: time="2024-07-02T00:23:21.322931012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:21.323198 containerd[1714]: time="2024-07-02T00:23:21.323014714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:21.323198 containerd[1714]: time="2024-07-02T00:23:21.323046214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:21.344662 systemd[1]: Started cri-containerd-d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27.scope - libcontainer container d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27. Jul 2 00:23:21.389838 containerd[1714]: time="2024-07-02T00:23:21.389767770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mkxcg,Uid:b9521cb9-ba88-416e-a7b7-6deacf74b146,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\"" Jul 2 00:23:22.361329 kubelet[3245]: E0702 00:23:22.359714 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:23.872641 containerd[1714]: time="2024-07-02T00:23:23.872595869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:23.876124 containerd[1714]: time="2024-07-02T00:23:23.876070119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes 
read=29458030" Jul 2 00:23:23.879972 containerd[1714]: time="2024-07-02T00:23:23.879905173Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:23.886377 containerd[1714]: time="2024-07-02T00:23:23.886272864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:23.887474 containerd[1714]: time="2024-07-02T00:23:23.887299879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.030706443s" Jul 2 00:23:23.887474 containerd[1714]: time="2024-07-02T00:23:23.887337979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:23:23.890449 containerd[1714]: time="2024-07-02T00:23:23.889699913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:23:23.908887 containerd[1714]: time="2024-07-02T00:23:23.908837386Z" level=info msg="CreateContainer within sandbox \"c64803a3b07266a712d8b8e4268f58541f52d15b424cf48ff12cf4ffdf3e513d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:23:23.952865 containerd[1714]: time="2024-07-02T00:23:23.952819414Z" level=info msg="CreateContainer within sandbox \"c64803a3b07266a712d8b8e4268f58541f52d15b424cf48ff12cf4ffdf3e513d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"38e74ea224b615f34ed756b838f8545632851fbec7a1092ea7f00ea0b0ce7ef7\"" Jul 2 
00:23:23.953826 containerd[1714]: time="2024-07-02T00:23:23.953484324Z" level=info msg="StartContainer for \"38e74ea224b615f34ed756b838f8545632851fbec7a1092ea7f00ea0b0ce7ef7\"" Jul 2 00:23:23.988599 systemd[1]: Started cri-containerd-38e74ea224b615f34ed756b838f8545632851fbec7a1092ea7f00ea0b0ce7ef7.scope - libcontainer container 38e74ea224b615f34ed756b838f8545632851fbec7a1092ea7f00ea0b0ce7ef7. Jul 2 00:23:24.033020 containerd[1714]: time="2024-07-02T00:23:24.032900057Z" level=info msg="StartContainer for \"38e74ea224b615f34ed756b838f8545632851fbec7a1092ea7f00ea0b0ce7ef7\" returns successfully" Jul 2 00:23:24.360927 kubelet[3245]: E0702 00:23:24.359373 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:24.475171 kubelet[3245]: I0702 00:23:24.475076 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-b844fcdb7-6g2s9" podStartSLOduration=1.443186911 podStartE2EDuration="4.475027268s" podCreationTimestamp="2024-07-02 00:23:20 +0000 UTC" firstStartedPulling="2024-07-02 00:23:20.855978129 +0000 UTC m=+20.920598419" lastFinishedPulling="2024-07-02 00:23:23.887818486 +0000 UTC m=+23.952438776" observedRunningTime="2024-07-02 00:23:24.470521604 +0000 UTC m=+24.535141894" watchObservedRunningTime="2024-07-02 00:23:24.475027268 +0000 UTC m=+24.539647558" Jul 2 00:23:24.506583 kubelet[3245]: E0702 00:23:24.506547 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.506583 kubelet[3245]: W0702 00:23:24.506579 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Jul 2 00:23:24.506795 kubelet[3245]: E0702 00:23:24.506604 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.508533 kubelet[3245]: E0702 00:23:24.508501 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.508533 kubelet[3245]: W0702 00:23:24.508521 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.508706 kubelet[3245]: E0702 00:23:24.508556 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.508788 kubelet[3245]: E0702 00:23:24.508774 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.508856 kubelet[3245]: W0702 00:23:24.508789 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.508856 kubelet[3245]: E0702 00:23:24.508805 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:24.509036 kubelet[3245]: E0702 00:23:24.509020 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.509106 kubelet[3245]: W0702 00:23:24.509037 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.509106 kubelet[3245]: E0702 00:23:24.509053 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.509270 kubelet[3245]: E0702 00:23:24.509257 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.509334 kubelet[3245]: W0702 00:23:24.509271 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.509334 kubelet[3245]: E0702 00:23:24.509287 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:24.509515 kubelet[3245]: E0702 00:23:24.509501 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.509515 kubelet[3245]: W0702 00:23:24.509516 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.509652 kubelet[3245]: E0702 00:23:24.509532 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.509773 kubelet[3245]: E0702 00:23:24.509745 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.509773 kubelet[3245]: W0702 00:23:24.509760 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.509901 kubelet[3245]: E0702 00:23:24.509776 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:24.510016 kubelet[3245]: E0702 00:23:24.509987 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.510016 kubelet[3245]: W0702 00:23:24.510001 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.510016 kubelet[3245]: E0702 00:23:24.510017 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.510237 kubelet[3245]: E0702 00:23:24.510214 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.510237 kubelet[3245]: W0702 00:23:24.510228 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.510338 kubelet[3245]: E0702 00:23:24.510252 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:24.510682 kubelet[3245]: E0702 00:23:24.510474 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.510682 kubelet[3245]: W0702 00:23:24.510489 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.510682 kubelet[3245]: E0702 00:23:24.510504 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.510820 kubelet[3245]: E0702 00:23:24.510717 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.510820 kubelet[3245]: W0702 00:23:24.510727 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.510820 kubelet[3245]: E0702 00:23:24.510742 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:24.522422 kubelet[3245]: E0702 00:23:24.522399 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.522422 kubelet[3245]: W0702 00:23:24.522421 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.522558 kubelet[3245]: E0702 00:23:24.522456 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:24.522684 kubelet[3245]: E0702 00:23:24.522667 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:24.522739 kubelet[3245]: W0702 00:23:24.522684 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:24.522739 kubelet[3245]: E0702 00:23:24.522729 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:25.449491 kubelet[3245]: I0702 00:23:25.449275 3245 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:23:25.521857 containerd[1714]: time="2024-07-02T00:23:25.521803211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:25.524655 containerd[1714]: time="2024-07-02T00:23:25.524582151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:23:25.528813 containerd[1714]: time="2024-07-02T00:23:25.528764310Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:25.529721 kubelet[3245]: E0702 00:23:25.529640 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:25.529721 kubelet[3245]: W0702 00:23:25.529685 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:25.529721 kubelet[3245]: E0702 00:23:25.529712 3245 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:25.540846 containerd[1714]: time="2024-07-02T00:23:25.540717181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:25.542051 containerd[1714]: time="2024-07-02T00:23:25.541917298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.652169484s" Jul 2 00:23:25.542051 containerd[1714]: time="2024-07-02T00:23:25.541957499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:23:25.544824 containerd[1714]: time="2024-07-02T00:23:25.544774739Z" level=info msg="CreateContainer within sandbox \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:23:25.585681 containerd[1714]: time="2024-07-02T00:23:25.585637822Z" level=info msg="CreateContainer within sandbox \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6\"" Jul 2 00:23:25.586551 containerd[1714]: time="2024-07-02T00:23:25.586276831Z" level=info msg="StartContainer for \"456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6\"" Jul 2 00:23:25.619607 systemd[1]: Started cri-containerd-456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6.scope - libcontainer 
container 456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6. Jul 2 00:23:25.652491 containerd[1714]: time="2024-07-02T00:23:25.652015070Z" level=info msg="StartContainer for \"456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6\" returns successfully" Jul 2 00:23:25.661323 systemd[1]: cri-containerd-456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6.scope: Deactivated successfully. Jul 2 00:23:25.684962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6-rootfs.mount: Deactivated successfully. Jul 2 00:23:26.360971 kubelet[3245]: E0702 00:23:26.359397 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:26.923727 containerd[1714]: time="2024-07-02T00:23:26.923654622Z" level=info msg="shim disconnected" id=456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6 namespace=k8s.io Jul 2 00:23:26.923727 containerd[1714]: time="2024-07-02T00:23:26.923730923Z" level=warning msg="cleaning up after shim disconnected" id=456eb7e87f61bc0bd6a490892f1edd7d264028c7809fef5cf89eb868f88019d6 namespace=k8s.io Jul 2 00:23:26.924390 containerd[1714]: time="2024-07-02T00:23:26.923743223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:27.456972 containerd[1714]: time="2024-07-02T00:23:27.456912034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:23:28.361791 kubelet[3245]: E0702 00:23:28.361709 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:30.359911 kubelet[3245]: E0702 00:23:30.359471 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:32.360457 kubelet[3245]: E0702 00:23:32.358925 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:34.361374 kubelet[3245]: E0702 00:23:34.359534 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:36.360031 kubelet[3245]: E0702 00:23:36.359690 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:38.361343 kubelet[3245]: E0702 00:23:38.359560 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 
00:23:40.360119 kubelet[3245]: E0702 00:23:40.359703 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:42.360259 kubelet[3245]: E0702 00:23:42.359778 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:42.742739 kubelet[3245]: I0702 00:23:42.742318 3245 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:23:44.361461 kubelet[3245]: E0702 00:23:44.359682 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:46.360469 kubelet[3245]: E0702 00:23:46.359255 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:48.361353 kubelet[3245]: E0702 00:23:48.359373 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" 
podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:50.359474 kubelet[3245]: E0702 00:23:50.359407 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:52.360012 kubelet[3245]: E0702 00:23:52.359578 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:54.359489 kubelet[3245]: E0702 00:23:54.359447 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:54.397886 containerd[1714]: time="2024-07-02T00:23:54.397825143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:54.399902 containerd[1714]: time="2024-07-02T00:23:54.399852470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:23:54.403865 containerd[1714]: time="2024-07-02T00:23:54.403766323Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:54.408720 containerd[1714]: time="2024-07-02T00:23:54.408640789Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:54.409575 containerd[1714]: time="2024-07-02T00:23:54.409461400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 26.952476164s" Jul 2 00:23:54.409575 containerd[1714]: time="2024-07-02T00:23:54.409495600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:23:54.411411 containerd[1714]: time="2024-07-02T00:23:54.411381426Z" level=info msg="CreateContainer within sandbox \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:23:54.455999 containerd[1714]: time="2024-07-02T00:23:54.455954726Z" level=info msg="CreateContainer within sandbox \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88\"" Jul 2 00:23:54.456737 containerd[1714]: time="2024-07-02T00:23:54.456421433Z" level=info msg="StartContainer for \"9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88\"" Jul 2 00:23:54.487384 systemd[1]: run-containerd-runc-k8s.io-9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88-runc.KCk6Gi.mount: Deactivated successfully. 
Jul 2 00:23:54.496573 systemd[1]: Started cri-containerd-9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88.scope - libcontainer container 9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88. Jul 2 00:23:54.528775 containerd[1714]: time="2024-07-02T00:23:54.528116908Z" level=info msg="StartContainer for \"9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88\" returns successfully" Jul 2 00:23:55.893262 containerd[1714]: time="2024-07-02T00:23:55.893204869Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:55.896444 systemd[1]: cri-containerd-9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88.scope: Deactivated successfully. Jul 2 00:23:55.920819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88-rootfs.mount: Deactivated successfully. 
Jul 2 00:23:55.961762 kubelet[3245]: I0702 00:23:55.960800 3245 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:23:56.410729 kubelet[3245]: I0702 00:23:55.979243 3245 topology_manager.go:215] "Topology Admit Handler" podUID="fdca042f-71be-426a-9e20-51906df4946a" podNamespace="kube-system" podName="coredns-76f75df574-24jmp" Jul 2 00:23:56.410729 kubelet[3245]: I0702 00:23:55.982568 3245 topology_manager.go:215] "Topology Admit Handler" podUID="af167feb-cfad-401b-a59b-ba2e1b1a7b28" podNamespace="kube-system" podName="coredns-76f75df574-zcrs9" Jul 2 00:23:56.410729 kubelet[3245]: I0702 00:23:55.983754 3245 topology_manager.go:215] "Topology Admit Handler" podUID="08a53d11-75a7-4cca-8c69-c90223c69562" podNamespace="calico-system" podName="calico-kube-controllers-7967f9f766-kd24n" Jul 2 00:23:56.410729 kubelet[3245]: I0702 00:23:56.048921 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a53d11-75a7-4cca-8c69-c90223c69562-tigera-ca-bundle\") pod \"calico-kube-controllers-7967f9f766-kd24n\" (UID: \"08a53d11-75a7-4cca-8c69-c90223c69562\") " pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" Jul 2 00:23:56.410729 kubelet[3245]: I0702 00:23:56.049006 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvpl\" (UniqueName: \"kubernetes.io/projected/af167feb-cfad-401b-a59b-ba2e1b1a7b28-kube-api-access-4zvpl\") pod \"coredns-76f75df574-zcrs9\" (UID: \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\") " pod="kube-system/coredns-76f75df574-zcrs9" Jul 2 00:23:56.410729 kubelet[3245]: I0702 00:23:56.049041 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xwj\" (UniqueName: \"kubernetes.io/projected/08a53d11-75a7-4cca-8c69-c90223c69562-kube-api-access-k5xwj\") pod 
\"calico-kube-controllers-7967f9f766-kd24n\" (UID: \"08a53d11-75a7-4cca-8c69-c90223c69562\") " pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" Jul 2 00:23:55.994410 systemd[1]: Created slice kubepods-burstable-podfdca042f_71be_426a_9e20_51906df4946a.slice - libcontainer container kubepods-burstable-podfdca042f_71be_426a_9e20_51906df4946a.slice. Jul 2 00:23:56.411198 kubelet[3245]: I0702 00:23:56.049087 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af167feb-cfad-401b-a59b-ba2e1b1a7b28-config-volume\") pod \"coredns-76f75df574-zcrs9\" (UID: \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\") " pod="kube-system/coredns-76f75df574-zcrs9" Jul 2 00:23:56.411198 kubelet[3245]: I0702 00:23:56.049115 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8nk\" (UniqueName: \"kubernetes.io/projected/fdca042f-71be-426a-9e20-51906df4946a-kube-api-access-bd8nk\") pod \"coredns-76f75df574-24jmp\" (UID: \"fdca042f-71be-426a-9e20-51906df4946a\") " pod="kube-system/coredns-76f75df574-24jmp" Jul 2 00:23:56.411198 kubelet[3245]: I0702 00:23:56.049172 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdca042f-71be-426a-9e20-51906df4946a-config-volume\") pod \"coredns-76f75df574-24jmp\" (UID: \"fdca042f-71be-426a-9e20-51906df4946a\") " pod="kube-system/coredns-76f75df574-24jmp" Jul 2 00:23:56.002606 systemd[1]: Created slice kubepods-burstable-podaf167feb_cfad_401b_a59b_ba2e1b1a7b28.slice - libcontainer container kubepods-burstable-podaf167feb_cfad_401b_a59b_ba2e1b1a7b28.slice. Jul 2 00:23:56.007530 systemd[1]: Created slice kubepods-besteffort-pod08a53d11_75a7_4cca_8c69_c90223c69562.slice - libcontainer container kubepods-besteffort-pod08a53d11_75a7_4cca_8c69_c90223c69562.slice. 
Jul 2 00:23:56.365759 systemd[1]: Created slice kubepods-besteffort-podd83f546f_f0c6_4f7a_a190_1895a85550b7.slice - libcontainer container kubepods-besteffort-podd83f546f_f0c6_4f7a_a190_1895a85550b7.slice. Jul 2 00:23:56.416127 containerd[1714]: time="2024-07-02T00:23:56.415895444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46k9p,Uid:d83f546f-f0c6-4f7a-a190-1895a85550b7,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:56.716322 containerd[1714]: time="2024-07-02T00:23:56.716143580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-24jmp,Uid:fdca042f-71be-426a-9e20-51906df4946a,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:56.717845 containerd[1714]: time="2024-07-02T00:23:56.717724202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zcrs9,Uid:af167feb-cfad-401b-a59b-ba2e1b1a7b28,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:56.722398 containerd[1714]: time="2024-07-02T00:23:56.722352968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7967f9f766-kd24n,Uid:08a53d11-75a7-4cca-8c69-c90223c69562,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:57.531942 containerd[1714]: time="2024-07-02T00:23:57.531871090Z" level=info msg="shim disconnected" id=9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88 namespace=k8s.io Jul 2 00:23:57.531942 containerd[1714]: time="2024-07-02T00:23:57.531932490Z" level=warning msg="cleaning up after shim disconnected" id=9836b30196fb17e9dc9d0321b7b401d6f7d3d7ae94319949b83c6be0a4ea1f88 namespace=k8s.io Jul 2 00:23:57.531942 containerd[1714]: time="2024-07-02T00:23:57.531944391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:57.767493 containerd[1714]: time="2024-07-02T00:23:57.766596401Z" level=error msg="Failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.770861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574-shm.mount: Deactivated successfully. Jul 2 00:23:57.775300 kubelet[3245]: E0702 00:23:57.772656 3245 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.775300 kubelet[3245]: E0702 00:23:57.772740 3245 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-24jmp" Jul 2 00:23:57.775300 kubelet[3245]: E0702 00:23:57.772783 3245 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-24jmp" Jul 2 00:23:57.776461 containerd[1714]: time="2024-07-02T00:23:57.771126765Z" level=error msg="encountered an error cleaning up failed sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.776461 containerd[1714]: time="2024-07-02T00:23:57.771197066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-24jmp,Uid:fdca042f-71be-426a-9e20-51906df4946a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.776565 kubelet[3245]: E0702 00:23:57.772860 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-24jmp_kube-system(fdca042f-71be-426a-9e20-51906df4946a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-24jmp_kube-system(fdca042f-71be-426a-9e20-51906df4946a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-24jmp" podUID="fdca042f-71be-426a-9e20-51906df4946a" Jul 2 00:23:57.794165 containerd[1714]: time="2024-07-02T00:23:57.792693370Z" level=error msg="Failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.794830 containerd[1714]: 
time="2024-07-02T00:23:57.794656297Z" level=error msg="encountered an error cleaning up failed sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.795008 containerd[1714]: time="2024-07-02T00:23:57.794978702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7967f9f766-kd24n,Uid:08a53d11-75a7-4cca-8c69-c90223c69562,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.796490 kubelet[3245]: E0702 00:23:57.795366 3245 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.796490 kubelet[3245]: E0702 00:23:57.795489 3245 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" Jul 2 00:23:57.796490 kubelet[3245]: E0702 00:23:57.795520 3245 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" Jul 2 00:23:57.796682 containerd[1714]: time="2024-07-02T00:23:57.796145018Z" level=error msg="Failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.796682 containerd[1714]: time="2024-07-02T00:23:57.796567524Z" level=error msg="encountered an error cleaning up failed sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.796682 containerd[1714]: time="2024-07-02T00:23:57.796619125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zcrs9,Uid:af167feb-cfad-401b-a59b-ba2e1b1a7b28,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.796813 kubelet[3245]: E0702 00:23:57.795781 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7967f9f766-kd24n_calico-system(08a53d11-75a7-4cca-8c69-c90223c69562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7967f9f766-kd24n_calico-system(08a53d11-75a7-4cca-8c69-c90223c69562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" podUID="08a53d11-75a7-4cca-8c69-c90223c69562" Jul 2 00:23:57.796813 kubelet[3245]: E0702 00:23:57.796807 3245 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.796937 kubelet[3245]: E0702 00:23:57.796850 3245 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zcrs9" Jul 2 00:23:57.796937 kubelet[3245]: E0702 00:23:57.796875 3245 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zcrs9" Jul 2 00:23:57.796937 kubelet[3245]: E0702 00:23:57.796927 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zcrs9_kube-system(af167feb-cfad-401b-a59b-ba2e1b1a7b28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zcrs9_kube-system(af167feb-cfad-401b-a59b-ba2e1b1a7b28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zcrs9" podUID="af167feb-cfad-401b-a59b-ba2e1b1a7b28" Jul 2 00:23:57.800277 containerd[1714]: time="2024-07-02T00:23:57.800246576Z" level=error msg="Failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.800670 containerd[1714]: time="2024-07-02T00:23:57.800633282Z" level=error msg="encountered an error cleaning up failed sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.800804 containerd[1714]: time="2024-07-02T00:23:57.800687182Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-46k9p,Uid:d83f546f-f0c6-4f7a-a190-1895a85550b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.800983 kubelet[3245]: E0702 00:23:57.800965 3245 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.801072 kubelet[3245]: E0702 00:23:57.801033 3245 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46k9p" Jul 2 00:23:57.801072 kubelet[3245]: E0702 00:23:57.801062 3245 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46k9p" Jul 2 00:23:57.801161 kubelet[3245]: E0702 00:23:57.801132 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-46k9p_calico-system(d83f546f-f0c6-4f7a-a190-1895a85550b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-46k9p_calico-system(d83f546f-f0c6-4f7a-a190-1895a85550b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:23:58.519789 containerd[1714]: time="2024-07-02T00:23:58.519309422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:23:58.519967 kubelet[3245]: I0702 00:23:58.519830 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:23:58.520804 containerd[1714]: time="2024-07-02T00:23:58.520570540Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:23:58.521073 containerd[1714]: time="2024-07-02T00:23:58.520949145Z" level=info msg="Ensure that sandbox 834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee in task-service has been cleanup successfully" Jul 2 00:23:58.522312 kubelet[3245]: I0702 00:23:58.522158 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:23:58.523324 containerd[1714]: time="2024-07-02T00:23:58.523243477Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:23:58.523846 containerd[1714]: time="2024-07-02T00:23:58.523814385Z" level=info msg="Ensure that sandbox 
0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d in task-service has been cleanup successfully" Jul 2 00:23:58.531261 kubelet[3245]: I0702 00:23:58.531186 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:23:58.532943 containerd[1714]: time="2024-07-02T00:23:58.531906799Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:23:58.532943 containerd[1714]: time="2024-07-02T00:23:58.532101102Z" level=info msg="Ensure that sandbox 52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574 in task-service has been cleanup successfully" Jul 2 00:23:58.540321 kubelet[3245]: I0702 00:23:58.540300 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:23:58.543240 containerd[1714]: time="2024-07-02T00:23:58.542557550Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 00:23:58.543240 containerd[1714]: time="2024-07-02T00:23:58.542885454Z" level=info msg="Ensure that sandbox 65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c in task-service has been cleanup successfully" Jul 2 00:23:58.617770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee-shm.mount: Deactivated successfully. Jul 2 00:23:58.617907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d-shm.mount: Deactivated successfully. Jul 2 00:23:58.617987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c-shm.mount: Deactivated successfully. 
Jul 2 00:23:58.622144 containerd[1714]: time="2024-07-02T00:23:58.622081472Z" level=error msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" failed" error="failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.622677 kubelet[3245]: E0702 00:23:58.622448 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:23:58.622677 kubelet[3245]: E0702 00:23:58.622543 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee"} Jul 2 00:23:58.622677 kubelet[3245]: E0702 00:23:58.622592 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.622677 kubelet[3245]: E0702 00:23:58.622633 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" podUID="08a53d11-75a7-4cca-8c69-c90223c69562" Jul 2 00:23:58.625460 containerd[1714]: time="2024-07-02T00:23:58.625150615Z" level=error msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" failed" error="failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.626541 kubelet[3245]: E0702 00:23:58.625662 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:23:58.626541 kubelet[3245]: E0702 00:23:58.625704 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d"} Jul 2 00:23:58.626541 kubelet[3245]: E0702 00:23:58.625753 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.626541 kubelet[3245]: E0702 00:23:58.625790 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zcrs9" podUID="af167feb-cfad-401b-a59b-ba2e1b1a7b28" Jul 2 00:23:58.626870 containerd[1714]: time="2024-07-02T00:23:58.626316032Z" level=error msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" failed" error="failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.626922 kubelet[3245]: E0702 00:23:58.626583 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 
00:23:58.626922 kubelet[3245]: E0702 00:23:58.626614 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574"} Jul 2 00:23:58.626922 kubelet[3245]: E0702 00:23:58.626670 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.626922 kubelet[3245]: E0702 00:23:58.626721 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-24jmp" podUID="fdca042f-71be-426a-9e20-51906df4946a" Jul 2 00:23:58.634773 containerd[1714]: time="2024-07-02T00:23:58.634732950Z" level=error msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" failed" error="failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.634908 kubelet[3245]: E0702 00:23:58.634895 3245 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:23:58.634979 kubelet[3245]: E0702 00:23:58.634928 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c"} Jul 2 00:23:58.635032 kubelet[3245]: E0702 00:23:58.634983 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d83f546f-f0c6-4f7a-a190-1895a85550b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.635032 kubelet[3245]: E0702 00:23:58.635017 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d83f546f-f0c6-4f7a-a190-1895a85550b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:24:09.360691 containerd[1714]: time="2024-07-02T00:24:09.360241636Z" level=info msg="StopPodSandbox for 
\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:24:09.385188 containerd[1714]: time="2024-07-02T00:24:09.385139477Z" level=error msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" failed" error="failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:09.385518 kubelet[3245]: E0702 00:24:09.385486 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:09.385917 kubelet[3245]: E0702 00:24:09.385536 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574"} Jul 2 00:24:09.385917 kubelet[3245]: E0702 00:24:09.385580 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:09.385917 kubelet[3245]: E0702 00:24:09.385619 3245 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-24jmp" podUID="fdca042f-71be-426a-9e20-51906df4946a" Jul 2 00:24:11.359347 containerd[1714]: time="2024-07-02T00:24:11.359282119Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 00:24:11.384561 containerd[1714]: time="2024-07-02T00:24:11.384509065Z" level=error msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" failed" error="failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:11.384806 kubelet[3245]: E0702 00:24:11.384758 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:24:11.384806 kubelet[3245]: E0702 00:24:11.384807 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c"} Jul 2 00:24:11.385168 kubelet[3245]: 
E0702 00:24:11.384856 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d83f546f-f0c6-4f7a-a190-1895a85550b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:11.385168 kubelet[3245]: E0702 00:24:11.384893 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d83f546f-f0c6-4f7a-a190-1895a85550b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:24:12.361734 containerd[1714]: time="2024-07-02T00:24:12.361671757Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:24:12.362581 containerd[1714]: time="2024-07-02T00:24:12.362236265Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:24:12.402201 containerd[1714]: time="2024-07-02T00:24:12.402111511Z" level=error msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" failed" error="failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 2 00:24:12.402529 containerd[1714]: time="2024-07-02T00:24:12.402383215Z" level=error msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" failed" error="failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:12.402604 kubelet[3245]: E0702 00:24:12.402370 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:12.402604 kubelet[3245]: E0702 00:24:12.402420 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee"} Jul 2 00:24:12.402604 kubelet[3245]: E0702 00:24:12.402483 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:12.402604 kubelet[3245]: E0702 00:24:12.402520 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" podUID="08a53d11-75a7-4cca-8c69-c90223c69562" Jul 2 00:24:12.403193 kubelet[3245]: E0702 00:24:12.402750 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:24:12.403193 kubelet[3245]: E0702 00:24:12.402779 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d"} Jul 2 00:24:12.403193 kubelet[3245]: E0702 00:24:12.402824 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:12.403193 kubelet[3245]: E0702 00:24:12.402857 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"af167feb-cfad-401b-a59b-ba2e1b1a7b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zcrs9" podUID="af167feb-cfad-401b-a59b-ba2e1b1a7b28" Jul 2 00:24:23.359388 containerd[1714]: time="2024-07-02T00:24:23.359323972Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:24:23.385560 containerd[1714]: time="2024-07-02T00:24:23.385509536Z" level=error msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" failed" error="failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:23.385898 kubelet[3245]: E0702 00:24:23.385866 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:23.386236 kubelet[3245]: E0702 00:24:23.385920 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee"} Jul 2 00:24:23.386236 kubelet[3245]: E0702 00:24:23.385966 3245 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:23.386236 kubelet[3245]: E0702 00:24:23.386007 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" podUID="08a53d11-75a7-4cca-8c69-c90223c69562" Jul 2 00:24:24.360541 containerd[1714]: time="2024-07-02T00:24:24.360381497Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:24:24.391856 containerd[1714]: time="2024-07-02T00:24:24.391072124Z" level=error msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" failed" error="failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:24.392141 kubelet[3245]: E0702 00:24:24.391502 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:24.392141 kubelet[3245]: E0702 00:24:24.391554 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574"} Jul 2 00:24:24.392141 kubelet[3245]: E0702 00:24:24.391605 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:24.392141 kubelet[3245]: E0702 00:24:24.391648 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-24jmp" podUID="fdca042f-71be-426a-9e20-51906df4946a" Jul 2 00:24:25.360129 containerd[1714]: time="2024-07-02T00:24:25.359697798Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 
00:24:25.385416 containerd[1714]: time="2024-07-02T00:24:25.385362655Z" level=error msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" failed" error="failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:25.385904 kubelet[3245]: E0702 00:24:25.385700 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:24:25.385904 kubelet[3245]: E0702 00:24:25.385750 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c"} Jul 2 00:24:25.385904 kubelet[3245]: E0702 00:24:25.385819 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d83f546f-f0c6-4f7a-a190-1895a85550b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:25.385904 kubelet[3245]: E0702 00:24:25.385870 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d83f546f-f0c6-4f7a-a190-1895a85550b7\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46k9p" podUID="d83f546f-f0c6-4f7a-a190-1895a85550b7" Jul 2 00:24:26.362257 containerd[1714]: time="2024-07-02T00:24:26.360639522Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:24:26.389291 containerd[1714]: time="2024-07-02T00:24:26.389236420Z" level=error msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" failed" error="failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:26.390206 kubelet[3245]: E0702 00:24:26.389639 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:24:26.390206 kubelet[3245]: E0702 00:24:26.389687 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d"} Jul 2 00:24:26.390206 kubelet[3245]: E0702 00:24:26.389761 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:26.390206 kubelet[3245]: E0702 00:24:26.389823 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af167feb-cfad-401b-a59b-ba2e1b1a7b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zcrs9" podUID="af167feb-cfad-401b-a59b-ba2e1b1a7b28" Jul 2 00:24:35.360715 containerd[1714]: time="2024-07-02T00:24:35.359307140Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:24:35.389903 containerd[1714]: time="2024-07-02T00:24:35.389842567Z" level=error msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" failed" error="failed to destroy network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:35.390161 kubelet[3245]: E0702 00:24:35.390117 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:35.390585 kubelet[3245]: E0702 00:24:35.390171 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574"} Jul 2 00:24:35.390585 kubelet[3245]: E0702 00:24:35.390218 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:35.390585 kubelet[3245]: E0702 00:24:35.390258 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdca042f-71be-426a-9e20-51906df4946a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-24jmp" podUID="fdca042f-71be-426a-9e20-51906df4946a" Jul 2 00:24:38.362367 containerd[1714]: time="2024-07-02T00:24:38.361994757Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:24:38.406017 containerd[1714]: 
time="2024-07-02T00:24:38.405899371Z" level=error msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" failed" error="failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.406177 kubelet[3245]: E0702 00:24:38.406152 3245 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:38.406582 kubelet[3245]: E0702 00:24:38.406198 3245 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee"} Jul 2 00:24:38.406582 kubelet[3245]: E0702 00:24:38.406247 3245 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:38.406582 kubelet[3245]: E0702 00:24:38.406289 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08a53d11-75a7-4cca-8c69-c90223c69562\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" podUID="08a53d11-75a7-4cca-8c69-c90223c69562" Jul 2 00:24:38.919527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776363703.mount: Deactivated successfully. Jul 2 00:24:38.960087 containerd[1714]: time="2024-07-02T00:24:38.960019325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.962416 containerd[1714]: time="2024-07-02T00:24:38.962369358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:24:38.965345 containerd[1714]: time="2024-07-02T00:24:38.965312599Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.970151 containerd[1714]: time="2024-07-02T00:24:38.970101366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:38.970845 containerd[1714]: time="2024-07-02T00:24:38.970678574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 40.451319552s" Jul 2 00:24:38.970845 containerd[1714]: 
time="2024-07-02T00:24:38.970720075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:24:38.987877 containerd[1714]: time="2024-07-02T00:24:38.987833115Z" level=info msg="CreateContainer within sandbox \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:24:39.029339 containerd[1714]: time="2024-07-02T00:24:39.029289195Z" level=info msg="CreateContainer within sandbox \"d6f518b5155d15a2211cb42ad2723a40b76ff00c484302866a27691795237e27\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fc59ab1729315835bae293ddbcd87b28c1bb18114e722a33bc837c12f5b4bb84\"" Jul 2 00:24:39.029916 containerd[1714]: time="2024-07-02T00:24:39.029886803Z" level=info msg="StartContainer for \"fc59ab1729315835bae293ddbcd87b28c1bb18114e722a33bc837c12f5b4bb84\"" Jul 2 00:24:39.059666 systemd[1]: Started cri-containerd-fc59ab1729315835bae293ddbcd87b28c1bb18114e722a33bc837c12f5b4bb84.scope - libcontainer container fc59ab1729315835bae293ddbcd87b28c1bb18114e722a33bc837c12f5b4bb84. Jul 2 00:24:39.089518 containerd[1714]: time="2024-07-02T00:24:39.089471237Z" level=info msg="StartContainer for \"fc59ab1729315835bae293ddbcd87b28c1bb18114e722a33bc837c12f5b4bb84\" returns successfully" Jul 2 00:24:39.581100 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:24:39.581254 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:24:40.363129 containerd[1714]: time="2024-07-02T00:24:40.362529351Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:24:40.363129 containerd[1714]: time="2024-07-02T00:24:40.362605852Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 00:24:40.421063 kubelet[3245]: I0702 00:24:40.420892 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mkxcg" podStartSLOduration=2.842936405 podStartE2EDuration="1m20.420830967s" podCreationTimestamp="2024-07-02 00:23:20 +0000 UTC" firstStartedPulling="2024-07-02 00:23:21.39328812 +0000 UTC m=+21.457908510" lastFinishedPulling="2024-07-02 00:24:38.971182682 +0000 UTC m=+99.035803072" observedRunningTime="2024-07-02 00:24:39.657903691 +0000 UTC m=+99.722524081" watchObservedRunningTime="2024-07-02 00:24:40.420830967 +0000 UTC m=+100.485451257" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.425 [INFO][4520] k8s.go 608: Cleaning up netns ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.426 [INFO][4520] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" iface="eth0" netns="/var/run/netns/cni-ec245480-a188-a6fe-91c6-6483189c8127" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.426 [INFO][4520] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" iface="eth0" netns="/var/run/netns/cni-ec245480-a188-a6fe-91c6-6483189c8127" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.426 [INFO][4520] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" iface="eth0" netns="/var/run/netns/cni-ec245480-a188-a6fe-91c6-6483189c8127" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.426 [INFO][4520] k8s.go 615: Releasing IP address(es) ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.426 [INFO][4520] utils.go 188: Calico CNI releasing IP address ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.472 [INFO][4533] ipam_plugin.go 411: Releasing address using handleID ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.472 [INFO][4533] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.472 [INFO][4533] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.479 [WARNING][4533] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.479 [INFO][4533] ipam_plugin.go 439: Releasing address using workloadID ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.480 [INFO][4533] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:40.484883 containerd[1714]: 2024-07-02 00:24:40.481 [INFO][4520] k8s.go 621: Teardown processing complete. ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:24:40.484883 containerd[1714]: time="2024-07-02T00:24:40.482768133Z" level=info msg="TearDown network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" successfully" Jul 2 00:24:40.484883 containerd[1714]: time="2024-07-02T00:24:40.482798634Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" returns successfully" Jul 2 00:24:40.488499 containerd[1714]: time="2024-07-02T00:24:40.486751689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46k9p,Uid:d83f546f-f0c6-4f7a-a190-1895a85550b7,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:40.488219 systemd[1]: run-netns-cni\x2dec245480\x2da188\x2da6fe\x2d91c6\x2d6483189c8127.mount: Deactivated successfully. 
Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.419 [INFO][4521] k8s.go 608: Cleaning up netns ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.420 [INFO][4521] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" iface="eth0" netns="/var/run/netns/cni-3cc6a5f2-154c-379c-520f-bd690e81401d" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.420 [INFO][4521] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" iface="eth0" netns="/var/run/netns/cni-3cc6a5f2-154c-379c-520f-bd690e81401d" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.421 [INFO][4521] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" iface="eth0" netns="/var/run/netns/cni-3cc6a5f2-154c-379c-520f-bd690e81401d" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.421 [INFO][4521] k8s.go 615: Releasing IP address(es) ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.421 [INFO][4521] utils.go 188: Calico CNI releasing IP address ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.476 [INFO][4532] ipam_plugin.go 411: Releasing address using handleID ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.477 [INFO][4532] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.480 [INFO][4532] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.490 [WARNING][4532] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.490 [INFO][4532] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.492 [INFO][4532] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:40.495350 containerd[1714]: 2024-07-02 00:24:40.494 [INFO][4521] k8s.go 621: Teardown processing complete. 
ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:24:40.495978 containerd[1714]: time="2024-07-02T00:24:40.495489812Z" level=info msg="TearDown network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" successfully" Jul 2 00:24:40.495978 containerd[1714]: time="2024-07-02T00:24:40.495516612Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" returns successfully" Jul 2 00:24:40.496121 containerd[1714]: time="2024-07-02T00:24:40.496084620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zcrs9,Uid:af167feb-cfad-401b-a59b-ba2e1b1a7b28,Namespace:kube-system,Attempt:1,}" Jul 2 00:24:40.499499 systemd[1]: run-netns-cni\x2d3cc6a5f2\x2d154c\x2d379c\x2d520f\x2dbd690e81401d.mount: Deactivated successfully. Jul 2 00:24:40.725779 systemd-networkd[1354]: calic4b2c7274dd: Link UP Jul 2 00:24:40.728022 systemd-networkd[1354]: calic4b2c7274dd: Gained carrier Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.607 [INFO][4557] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.618 [INFO][4557] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0 coredns-76f75df574- kube-system af167feb-cfad-401b-a59b-ba2e1b1a7b28 812 0 2024-07-02 00:23:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-7b42818af6 coredns-76f75df574-zcrs9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4b2c7274dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" 
WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.618 [INFO][4557] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.668 [INFO][4575] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" HandleID="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.680 [INFO][4575] ipam_plugin.go 264: Auto assigning IP ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" HandleID="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318320), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-7b42818af6", "pod":"coredns-76f75df574-zcrs9", "timestamp":"2024-07-02 00:24:40.668909638 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7b42818af6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.681 [INFO][4575] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.681 [INFO][4575] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.681 [INFO][4575] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7b42818af6' Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.683 [INFO][4575] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.687 [INFO][4575] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.693 [INFO][4575] ipam.go 489: Trying affinity for 192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.697 [INFO][4575] ipam.go 155: Attempting to load block cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.699 [INFO][4575] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.699 [INFO][4575] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.64/26 handle="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.700 [INFO][4575] ipam.go 1685: Creating new handle: k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.704 [INFO][4575] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.64/26 handle="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.711 [INFO][4575] ipam.go 1216: Successfully claimed IPs: [192.168.30.65/26] block=192.168.30.64/26 
handle="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.711 [INFO][4575] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.65/26] handle="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.711 [INFO][4575] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:40.748073 containerd[1714]: 2024-07-02 00:24:40.711 [INFO][4575] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.30.65/26] IPv6=[] ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" HandleID="k8s-pod-network.1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.749153 containerd[1714]: 2024-07-02 00:24:40.713 [INFO][4557] k8s.go 386: Populated endpoint ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"af167feb-cfad-401b-a59b-ba2e1b1a7b28", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"", Pod:"coredns-76f75df574-zcrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4b2c7274dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.749153 containerd[1714]: 2024-07-02 00:24:40.713 [INFO][4557] k8s.go 387: Calico CNI using IPs: [192.168.30.65/32] ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.749153 containerd[1714]: 2024-07-02 00:24:40.713 [INFO][4557] dataplane_linux.go 68: Setting the host side veth name to calic4b2c7274dd ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.749153 containerd[1714]: 2024-07-02 00:24:40.727 [INFO][4557] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" 
WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.749153 containerd[1714]: 2024-07-02 00:24:40.729 [INFO][4557] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"af167feb-cfad-401b-a59b-ba2e1b1a7b28", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec", Pod:"coredns-76f75df574-zcrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4b2c7274dd", MAC:"ca:91:84:05:17:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.749153 containerd[1714]: 2024-07-02 00:24:40.744 [INFO][4557] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec" Namespace="kube-system" Pod="coredns-76f75df574-zcrs9" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:24:40.777912 containerd[1714]: time="2024-07-02T00:24:40.775711833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:40.777912 containerd[1714]: time="2024-07-02T00:24:40.775789634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.777912 containerd[1714]: time="2024-07-02T00:24:40.775824234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:40.777912 containerd[1714]: time="2024-07-02T00:24:40.775843835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.804083 systemd[1]: Started cri-containerd-1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec.scope - libcontainer container 1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec. 
Jul 2 00:24:40.805105 systemd-networkd[1354]: calic4ec20bb180: Link UP Jul 2 00:24:40.805933 systemd-networkd[1354]: calic4ec20bb180: Gained carrier Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.594 [INFO][4546] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.610 [INFO][4546] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0 csi-node-driver- calico-system d83f546f-f0c6-4f7a-a190-1895a85550b7 813 0 2024-07-02 00:23:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-a-7b42818af6 csi-node-driver-46k9p eth0 default [] [] [kns.calico-system ksa.calico-system.default] calic4ec20bb180 [] []}} ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.610 [INFO][4546] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.664 [INFO][4570] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" HandleID="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.830599 
containerd[1714]: 2024-07-02 00:24:40.682 [INFO][4570] ipam_plugin.go 264: Auto assigning IP ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" HandleID="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036aa30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-7b42818af6", "pod":"csi-node-driver-46k9p", "timestamp":"2024-07-02 00:24:40.66471658 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7b42818af6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.682 [INFO][4570] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.711 [INFO][4570] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.711 [INFO][4570] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7b42818af6' Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.720 [INFO][4570] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.730 [INFO][4570] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.743 [INFO][4570] ipam.go 489: Trying affinity for 192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.771 [INFO][4570] ipam.go 155: Attempting to load block cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.782 [INFO][4570] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.782 [INFO][4570] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.64/26 handle="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.784 [INFO][4570] ipam.go 1685: Creating new handle: k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64 Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.789 [INFO][4570] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.64/26 handle="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.798 [INFO][4570] ipam.go 1216: Successfully claimed IPs: [192.168.30.66/26] block=192.168.30.64/26 
handle="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.798 [INFO][4570] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.66/26] handle="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.798 [INFO][4570] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:40.830599 containerd[1714]: 2024-07-02 00:24:40.798 [INFO][4570] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.30.66/26] IPv6=[] ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" HandleID="k8s-pod-network.23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.832020 containerd[1714]: 2024-07-02 00:24:40.802 [INFO][4546] k8s.go 386: Populated endpoint ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d83f546f-f0c6-4f7a-a190-1895a85550b7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"", Pod:"csi-node-driver-46k9p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic4ec20bb180", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.832020 containerd[1714]: 2024-07-02 00:24:40.802 [INFO][4546] k8s.go 387: Calico CNI using IPs: [192.168.30.66/32] ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.832020 containerd[1714]: 2024-07-02 00:24:40.802 [INFO][4546] dataplane_linux.go 68: Setting the host side veth name to calic4ec20bb180 ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.832020 containerd[1714]: 2024-07-02 00:24:40.806 [INFO][4546] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.832020 containerd[1714]: 2024-07-02 00:24:40.808 [INFO][4546] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" 
Namespace="calico-system" Pod="csi-node-driver-46k9p" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d83f546f-f0c6-4f7a-a190-1895a85550b7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64", Pod:"csi-node-driver-46k9p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic4ec20bb180", MAC:"ee:3d:57:d7:41:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.832020 containerd[1714]: 2024-07-02 00:24:40.825 [INFO][4546] k8s.go 500: Wrote updated endpoint to datastore ContainerID="23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64" Namespace="calico-system" Pod="csi-node-driver-46k9p" 
WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:24:40.867147 containerd[1714]: time="2024-07-02T00:24:40.866786407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zcrs9,Uid:af167feb-cfad-401b-a59b-ba2e1b1a7b28,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec\"" Jul 2 00:24:40.870675 containerd[1714]: time="2024-07-02T00:24:40.870505459Z" level=info msg="CreateContainer within sandbox \"1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:24:40.873600 containerd[1714]: time="2024-07-02T00:24:40.872940193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:40.873711 containerd[1714]: time="2024-07-02T00:24:40.873635603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.873711 containerd[1714]: time="2024-07-02T00:24:40.873678104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:40.873802 containerd[1714]: time="2024-07-02T00:24:40.873715304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.891599 systemd[1]: Started cri-containerd-23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64.scope - libcontainer container 23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64. 
Jul 2 00:24:40.914605 containerd[1714]: time="2024-07-02T00:24:40.914411274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46k9p,Uid:d83f546f-f0c6-4f7a-a190-1895a85550b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64\"" Jul 2 00:24:40.916953 containerd[1714]: time="2024-07-02T00:24:40.916898808Z" level=info msg="CreateContainer within sandbox \"1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8dbc506ef125b094883d0bf524d4714b14bc6f6670eff988902dee1c2bb798b8\"" Jul 2 00:24:40.917738 containerd[1714]: time="2024-07-02T00:24:40.917612618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:24:40.918806 containerd[1714]: time="2024-07-02T00:24:40.918653833Z" level=info msg="StartContainer for \"8dbc506ef125b094883d0bf524d4714b14bc6f6670eff988902dee1c2bb798b8\"" Jul 2 00:24:40.967569 systemd[1]: Started cri-containerd-8dbc506ef125b094883d0bf524d4714b14bc6f6670eff988902dee1c2bb798b8.scope - libcontainer container 8dbc506ef125b094883d0bf524d4714b14bc6f6670eff988902dee1c2bb798b8. 
Jul 2 00:24:40.993175 containerd[1714]: time="2024-07-02T00:24:40.993045674Z" level=info msg="StartContainer for \"8dbc506ef125b094883d0bf524d4714b14bc6f6670eff988902dee1c2bb798b8\" returns successfully" Jul 2 00:24:41.506452 systemd-networkd[1354]: vxlan.calico: Link UP Jul 2 00:24:41.506464 systemd-networkd[1354]: vxlan.calico: Gained carrier Jul 2 00:24:41.649486 kubelet[3245]: I0702 00:24:41.649241 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zcrs9" podStartSLOduration=88.649194756 podStartE2EDuration="1m28.649194756s" podCreationTimestamp="2024-07-02 00:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:41.648218642 +0000 UTC m=+101.712838932" watchObservedRunningTime="2024-07-02 00:24:41.649194756 +0000 UTC m=+101.713815146" Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.505544 1693 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.505589 1693 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.505779 1693 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.506311 1693 omaha_request_params.cc:62] Current group set to beta Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.506465 1693 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.506474 1693 update_attempter.cc:643] Scheduling an action processor start. 
Jul 2 00:24:42.506558 update_engine[1693]: I0702 00:24:42.506491 1693 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:24:42.507168 update_engine[1693]: I0702 00:24:42.506675 1693 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 00:24:42.507168 update_engine[1693]: I0702 00:24:42.506759 1693 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:24:42.507168 update_engine[1693]: I0702 00:24:42.506767 1693 omaha_request_action.cc:272] Request: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: Jul 2 00:24:42.507168 update_engine[1693]: I0702 00:24:42.506773 1693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:24:42.508317 locksmithd[1749]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 00:24:42.508854 update_engine[1693]: I0702 00:24:42.508236 1693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:24:42.508854 update_engine[1693]: I0702 00:24:42.508677 1693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:24:42.525443 update_engine[1693]: E0702 00:24:42.525411 1693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:24:42.525546 update_engine[1693]: I0702 00:24:42.525498 1693 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 00:24:42.580630 systemd-networkd[1354]: calic4b2c7274dd: Gained IPv6LL Jul 2 00:24:42.645646 systemd-networkd[1354]: calic4ec20bb180: Gained IPv6LL Jul 2 00:24:42.719628 containerd[1714]: time="2024-07-02T00:24:42.719573730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.723728 containerd[1714]: time="2024-07-02T00:24:42.723654987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:24:42.729545 containerd[1714]: time="2024-07-02T00:24:42.729480568Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.734301 containerd[1714]: time="2024-07-02T00:24:42.734269235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.735034 containerd[1714]: time="2024-07-02T00:24:42.734884344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.817133823s" Jul 2 00:24:42.735034 containerd[1714]: time="2024-07-02T00:24:42.734924744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference 
\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:24:42.737125 containerd[1714]: time="2024-07-02T00:24:42.736929372Z" level=info msg="CreateContainer within sandbox \"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:24:42.777303 containerd[1714]: time="2024-07-02T00:24:42.777161134Z" level=info msg="CreateContainer within sandbox \"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"76be16ea324e44d18d96f4758f12947a7242094d21402c54ba3ce99183cbf3c2\"" Jul 2 00:24:42.777872 containerd[1714]: time="2024-07-02T00:24:42.777841443Z" level=info msg="StartContainer for \"76be16ea324e44d18d96f4758f12947a7242094d21402c54ba3ce99183cbf3c2\"" Jul 2 00:24:42.816603 systemd[1]: Started cri-containerd-76be16ea324e44d18d96f4758f12947a7242094d21402c54ba3ce99183cbf3c2.scope - libcontainer container 76be16ea324e44d18d96f4758f12947a7242094d21402c54ba3ce99183cbf3c2. 
Jul 2 00:24:42.836615 systemd-networkd[1354]: vxlan.calico: Gained IPv6LL Jul 2 00:24:42.846707 containerd[1714]: time="2024-07-02T00:24:42.846668405Z" level=info msg="StartContainer for \"76be16ea324e44d18d96f4758f12947a7242094d21402c54ba3ce99183cbf3c2\" returns successfully" Jul 2 00:24:42.847900 containerd[1714]: time="2024-07-02T00:24:42.847818721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:24:44.764990 containerd[1714]: time="2024-07-02T00:24:44.764937696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:44.767127 containerd[1714]: time="2024-07-02T00:24:44.767063025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:24:44.773927 containerd[1714]: time="2024-07-02T00:24:44.772771805Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:44.777971 containerd[1714]: time="2024-07-02T00:24:44.777933177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:44.778632 containerd[1714]: time="2024-07-02T00:24:44.778595886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.930687664s" Jul 2 00:24:44.778724 containerd[1714]: 
time="2024-07-02T00:24:44.778638287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:24:44.780255 containerd[1714]: time="2024-07-02T00:24:44.780227409Z" level=info msg="CreateContainer within sandbox \"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:24:44.844517 containerd[1714]: time="2024-07-02T00:24:44.844471306Z" level=info msg="CreateContainer within sandbox \"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d0ab00ca954bacbdabeff99928819ed356db3666dff0db201a444dc02c34dcae\"" Jul 2 00:24:44.845039 containerd[1714]: time="2024-07-02T00:24:44.845006014Z" level=info msg="StartContainer for \"d0ab00ca954bacbdabeff99928819ed356db3666dff0db201a444dc02c34dcae\"" Jul 2 00:24:44.876583 systemd[1]: Started cri-containerd-d0ab00ca954bacbdabeff99928819ed356db3666dff0db201a444dc02c34dcae.scope - libcontainer container d0ab00ca954bacbdabeff99928819ed356db3666dff0db201a444dc02c34dcae. 
Jul 2 00:24:44.920962 containerd[1714]: time="2024-07-02T00:24:44.920921474Z" level=info msg="StartContainer for \"d0ab00ca954bacbdabeff99928819ed356db3666dff0db201a444dc02c34dcae\" returns successfully" Jul 2 00:24:45.470839 kubelet[3245]: I0702 00:24:45.470801 3245 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:24:45.470839 kubelet[3245]: I0702 00:24:45.470845 3245 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:24:45.660548 kubelet[3245]: I0702 00:24:45.660511 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-46k9p" podStartSLOduration=81.798723624 podStartE2EDuration="1m25.660463603s" podCreationTimestamp="2024-07-02 00:23:20 +0000 UTC" firstStartedPulling="2024-07-02 00:24:40.917158712 +0000 UTC m=+100.981779102" lastFinishedPulling="2024-07-02 00:24:44.778898791 +0000 UTC m=+104.843519081" observedRunningTime="2024-07-02 00:24:45.659888395 +0000 UTC m=+105.724508685" watchObservedRunningTime="2024-07-02 00:24:45.660463603 +0000 UTC m=+105.725083993" Jul 2 00:24:49.359949 containerd[1714]: time="2024-07-02T00:24:49.359684067Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.399 [INFO][5050] k8s.go 608: Cleaning up netns ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.399 [INFO][5050] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" iface="eth0" netns="/var/run/netns/cni-9479b837-6387-3937-b5f6-d9f2ed8338f4" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.401 [INFO][5050] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" iface="eth0" netns="/var/run/netns/cni-9479b837-6387-3937-b5f6-d9f2ed8338f4" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.401 [INFO][5050] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" iface="eth0" netns="/var/run/netns/cni-9479b837-6387-3937-b5f6-d9f2ed8338f4" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.401 [INFO][5050] k8s.go 615: Releasing IP address(es) ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.401 [INFO][5050] utils.go 188: Calico CNI releasing IP address ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.420 [INFO][5057] ipam_plugin.go 411: Releasing address using handleID ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.420 [INFO][5057] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.421 [INFO][5057] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.425 [WARNING][5057] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.425 [INFO][5057] ipam_plugin.go 439: Releasing address using workloadID ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.427 [INFO][5057] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:49.429018 containerd[1714]: 2024-07-02 00:24:49.428 [INFO][5050] k8s.go 621: Teardown processing complete. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:24:49.431689 containerd[1714]: time="2024-07-02T00:24:49.431632572Z" level=info msg="TearDown network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" successfully" Jul 2 00:24:49.431689 containerd[1714]: time="2024-07-02T00:24:49.431682672Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" returns successfully" Jul 2 00:24:49.433175 containerd[1714]: time="2024-07-02T00:24:49.432885989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-24jmp,Uid:fdca042f-71be-426a-9e20-51906df4946a,Namespace:kube-system,Attempt:1,}" Jul 2 00:24:49.434218 systemd[1]: run-netns-cni\x2d9479b837\x2d6387\x2d3937\x2db5f6\x2dd9f2ed8338f4.mount: Deactivated successfully. 
Jul 2 00:24:49.576032 systemd-networkd[1354]: calid76f4af802b: Link UP Jul 2 00:24:49.577012 systemd-networkd[1354]: calid76f4af802b: Gained carrier Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.520 [INFO][5063] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0 coredns-76f75df574- kube-system fdca042f-71be-426a-9e20-51906df4946a 856 0 2024-07-02 00:23:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-7b42818af6 coredns-76f75df574-24jmp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid76f4af802b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.520 [INFO][5063] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.544 [INFO][5074] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" HandleID="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.552 [INFO][5074] ipam_plugin.go 264: Auto assigning IP 
ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" HandleID="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-7b42818af6", "pod":"coredns-76f75df574-24jmp", "timestamp":"2024-07-02 00:24:49.54465285 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7b42818af6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.552 [INFO][5074] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.552 [INFO][5074] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.552 [INFO][5074] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7b42818af6' Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.554 [INFO][5074] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.557 [INFO][5074] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.560 [INFO][5074] ipam.go 489: Trying affinity for 192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.561 [INFO][5074] ipam.go 155: Attempting to load block cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.563 [INFO][5074] ipam.go 232: 
Affinity is confirmed and block has been loaded cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.563 [INFO][5074] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.64/26 handle="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.564 [INFO][5074] ipam.go 1685: Creating new handle: k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.567 [INFO][5074] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.64/26 handle="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.571 [INFO][5074] ipam.go 1216: Successfully claimed IPs: [192.168.30.67/26] block=192.168.30.64/26 handle="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.571 [INFO][5074] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.67/26] handle="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.571 [INFO][5074] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:24:49.591538 containerd[1714]: 2024-07-02 00:24:49.571 [INFO][5074] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.30.67/26] IPv6=[] ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" HandleID="k8s-pod-network.7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.593968 containerd[1714]: 2024-07-02 00:24:49.573 [INFO][5063] k8s.go 386: Populated endpoint ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fdca042f-71be-426a-9e20-51906df4946a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"", Pod:"coredns-76f75df574-24jmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid76f4af802b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:49.593968 containerd[1714]: 2024-07-02 00:24:49.573 [INFO][5063] k8s.go 387: Calico CNI using IPs: [192.168.30.67/32] ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.593968 containerd[1714]: 2024-07-02 00:24:49.573 [INFO][5063] dataplane_linux.go 68: Setting the host side veth name to calid76f4af802b ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.593968 containerd[1714]: 2024-07-02 00:24:49.576 [INFO][5063] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.593968 containerd[1714]: 2024-07-02 00:24:49.578 [INFO][5063] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fdca042f-71be-426a-9e20-51906df4946a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd", Pod:"coredns-76f75df574-24jmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid76f4af802b", MAC:"8e:67:f4:13:9d:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:49.593968 containerd[1714]: 2024-07-02 00:24:49.586 [INFO][5063] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd" Namespace="kube-system" Pod="coredns-76f75df574-24jmp" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:24:49.636336 containerd[1714]: time="2024-07-02T00:24:49.626776397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:49.636336 containerd[1714]: time="2024-07-02T00:24:49.626836298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:49.636336 containerd[1714]: time="2024-07-02T00:24:49.626854698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:49.636336 containerd[1714]: time="2024-07-02T00:24:49.626867998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:49.658637 systemd[1]: Started cri-containerd-7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd.scope - libcontainer container 7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd. 
Jul 2 00:24:49.699496 containerd[1714]: time="2024-07-02T00:24:49.699462512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-24jmp,Uid:fdca042f-71be-426a-9e20-51906df4946a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd\"" Jul 2 00:24:49.703176 containerd[1714]: time="2024-07-02T00:24:49.703131663Z" level=info msg="CreateContainer within sandbox \"7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:24:49.741405 containerd[1714]: time="2024-07-02T00:24:49.741363797Z" level=info msg="CreateContainer within sandbox \"7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"139025fd0b391ed0544ee6b7d0fd8c583cb1b03a63c0d5b9b8181a0fef4a3671\"" Jul 2 00:24:49.741968 containerd[1714]: time="2024-07-02T00:24:49.741831604Z" level=info msg="StartContainer for \"139025fd0b391ed0544ee6b7d0fd8c583cb1b03a63c0d5b9b8181a0fef4a3671\"" Jul 2 00:24:49.768651 systemd[1]: Started cri-containerd-139025fd0b391ed0544ee6b7d0fd8c583cb1b03a63c0d5b9b8181a0fef4a3671.scope - libcontainer container 139025fd0b391ed0544ee6b7d0fd8c583cb1b03a63c0d5b9b8181a0fef4a3671. 
Jul 2 00:24:49.798328 containerd[1714]: time="2024-07-02T00:24:49.798279992Z" level=info msg="StartContainer for \"139025fd0b391ed0544ee6b7d0fd8c583cb1b03a63c0d5b9b8181a0fef4a3671\" returns successfully" Jul 2 00:24:50.673382 kubelet[3245]: I0702 00:24:50.673340 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-24jmp" podStartSLOduration=97.673297787 podStartE2EDuration="1m37.673297787s" podCreationTimestamp="2024-07-02 00:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:50.672539377 +0000 UTC m=+110.737159667" watchObservedRunningTime="2024-07-02 00:24:50.673297787 +0000 UTC m=+110.737918077" Jul 2 00:24:51.284763 systemd-networkd[1354]: calid76f4af802b: Gained IPv6LL Jul 2 00:24:52.361976 containerd[1714]: time="2024-07-02T00:24:52.359980866Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.400 [INFO][5204] k8s.go 608: Cleaning up netns ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.402 [INFO][5204] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" iface="eth0" netns="/var/run/netns/cni-f4cf5559-bb66-24a9-e2ad-9d2d6764e047" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.402 [INFO][5204] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" iface="eth0" netns="/var/run/netns/cni-f4cf5559-bb66-24a9-e2ad-9d2d6764e047" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.402 [INFO][5204] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" iface="eth0" netns="/var/run/netns/cni-f4cf5559-bb66-24a9-e2ad-9d2d6764e047" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.402 [INFO][5204] k8s.go 615: Releasing IP address(es) ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.402 [INFO][5204] utils.go 188: Calico CNI releasing IP address ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.421 [INFO][5210] ipam_plugin.go 411: Releasing address using handleID ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.421 [INFO][5210] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.421 [INFO][5210] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.426 [WARNING][5210] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.426 [INFO][5210] ipam_plugin.go 439: Releasing address using workloadID ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.427 [INFO][5210] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.430220 containerd[1714]: 2024-07-02 00:24:52.428 [INFO][5204] k8s.go 621: Teardown processing complete. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:24:52.433731 containerd[1714]: time="2024-07-02T00:24:52.431583346Z" level=info msg="TearDown network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" successfully" Jul 2 00:24:52.433731 containerd[1714]: time="2024-07-02T00:24:52.431634247Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" returns successfully" Jul 2 00:24:52.433731 containerd[1714]: time="2024-07-02T00:24:52.433653574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7967f9f766-kd24n,Uid:08a53d11-75a7-4cca-8c69-c90223c69562,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:52.434147 systemd[1]: run-netns-cni\x2df4cf5559\x2dbb66\x2d24a9\x2de2ad\x2d9d2d6764e047.mount: Deactivated successfully. 
Jul 2 00:24:52.506587 update_engine[1693]: I0702 00:24:52.506109 1693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:24:52.506587 update_engine[1693]: I0702 00:24:52.506302 1693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:24:52.506587 update_engine[1693]: I0702 00:24:52.506536 1693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:24:52.511499 update_engine[1693]: E0702 00:24:52.511413 1693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:24:52.511915 update_engine[1693]: I0702 00:24:52.511827 1693 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 00:24:52.594886 systemd-networkd[1354]: cali6394870b44e: Link UP Jul 2 00:24:52.595144 systemd-networkd[1354]: cali6394870b44e: Gained carrier Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.512 [INFO][5216] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0 calico-kube-controllers-7967f9f766- calico-system 08a53d11-75a7-4cca-8c69-c90223c69562 876 0 2024-07-02 00:23:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7967f9f766 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-a-7b42818af6 calico-kube-controllers-7967f9f766-kd24n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6394870b44e [] []}} ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.512 [INFO][5216] k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.554 [INFO][5227] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" HandleID="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.562 [INFO][5227] ipam_plugin.go 264: Auto assigning IP ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" HandleID="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000501b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-7b42818af6", "pod":"calico-kube-controllers-7967f9f766-kd24n", "timestamp":"2024-07-02 00:24:52.554013921 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7b42818af6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.562 [INFO][5227] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.562 [INFO][5227] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.562 [INFO][5227] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7b42818af6' Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.564 [INFO][5227] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.567 [INFO][5227] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.571 [INFO][5227] ipam.go 489: Trying affinity for 192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.573 [INFO][5227] ipam.go 155: Attempting to load block cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.575 [INFO][5227] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.576 [INFO][5227] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.64/26 handle="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.577 [INFO][5227] ipam.go 1685: Creating new handle: k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32 Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.580 [INFO][5227] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.64/26 handle="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.585 [INFO][5227] ipam.go 1216: Successfully claimed IPs: [192.168.30.68/26] block=192.168.30.64/26 
handle="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.586 [INFO][5227] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.68/26] handle="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.586 [INFO][5227] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.614092 containerd[1714]: 2024-07-02 00:24:52.586 [INFO][5227] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.30.68/26] IPv6=[] ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" HandleID="k8s-pod-network.e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.615037 containerd[1714]: 2024-07-02 00:24:52.588 [INFO][5216] k8s.go 386: Populated endpoint ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0", GenerateName:"calico-kube-controllers-7967f9f766-", Namespace:"calico-system", SelfLink:"", UID:"08a53d11-75a7-4cca-8c69-c90223c69562", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7967f9f766", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"", Pod:"calico-kube-controllers-7967f9f766-kd24n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6394870b44e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.615037 containerd[1714]: 2024-07-02 00:24:52.588 [INFO][5216] k8s.go 387: Calico CNI using IPs: [192.168.30.68/32] ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.615037 containerd[1714]: 2024-07-02 00:24:52.588 [INFO][5216] dataplane_linux.go 68: Setting the host side veth name to cali6394870b44e ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.615037 containerd[1714]: 2024-07-02 00:24:52.591 [INFO][5216] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 
00:24:52.615037 containerd[1714]: 2024-07-02 00:24:52.591 [INFO][5216] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0", GenerateName:"calico-kube-controllers-7967f9f766-", Namespace:"calico-system", SelfLink:"", UID:"08a53d11-75a7-4cca-8c69-c90223c69562", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7967f9f766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32", Pod:"calico-kube-controllers-7967f9f766-kd24n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6394870b44e", MAC:"3a:14:5e:54:37:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 
00:24:52.615037 containerd[1714]: 2024-07-02 00:24:52.611 [INFO][5216] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32" Namespace="calico-system" Pod="calico-kube-controllers-7967f9f766-kd24n" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:24:52.649401 containerd[1714]: time="2024-07-02T00:24:52.648716017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:52.649401 containerd[1714]: time="2024-07-02T00:24:52.648777718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.649401 containerd[1714]: time="2024-07-02T00:24:52.648809418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:52.649401 containerd[1714]: time="2024-07-02T00:24:52.648828219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.697596 systemd[1]: Started cri-containerd-e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32.scope - libcontainer container e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32. 
Jul 2 00:24:52.734981 containerd[1714]: time="2024-07-02T00:24:52.734940197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7967f9f766-kd24n,Uid:08a53d11-75a7-4cca-8c69-c90223c69562,Namespace:calico-system,Attempt:1,} returns sandbox id \"e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32\"" Jul 2 00:24:52.736938 containerd[1714]: time="2024-07-02T00:24:52.736406217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:24:53.844588 systemd-networkd[1354]: cali6394870b44e: Gained IPv6LL Jul 2 00:24:55.366101 containerd[1714]: time="2024-07-02T00:24:55.365519192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.370400 containerd[1714]: time="2024-07-02T00:24:55.370344858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:24:55.375476 containerd[1714]: time="2024-07-02T00:24:55.375416027Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.382103 containerd[1714]: time="2024-07-02T00:24:55.382064118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.383806 containerd[1714]: time="2024-07-02T00:24:55.383766141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", 
size \"34953521\" in 2.646963218s" Jul 2 00:24:55.383901 containerd[1714]: time="2024-07-02T00:24:55.383812242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:24:55.403117 containerd[1714]: time="2024-07-02T00:24:55.402955904Z" level=info msg="CreateContainer within sandbox \"e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:24:55.446609 containerd[1714]: time="2024-07-02T00:24:55.446567801Z" level=info msg="CreateContainer within sandbox \"e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"33c584dc9b921e3b3cf4c236a13cbd5ac37d60c8e555421daafdb89591c90eef\"" Jul 2 00:24:55.447771 containerd[1714]: time="2024-07-02T00:24:55.447032907Z" level=info msg="StartContainer for \"33c584dc9b921e3b3cf4c236a13cbd5ac37d60c8e555421daafdb89591c90eef\"" Jul 2 00:24:55.481601 systemd[1]: Started cri-containerd-33c584dc9b921e3b3cf4c236a13cbd5ac37d60c8e555421daafdb89591c90eef.scope - libcontainer container 33c584dc9b921e3b3cf4c236a13cbd5ac37d60c8e555421daafdb89591c90eef. 
Jul 2 00:24:55.525781 containerd[1714]: time="2024-07-02T00:24:55.525718784Z" level=info msg="StartContainer for \"33c584dc9b921e3b3cf4c236a13cbd5ac37d60c8e555421daafdb89591c90eef\" returns successfully" Jul 2 00:24:55.761523 kubelet[3245]: I0702 00:24:55.760880 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7967f9f766-kd24n" podStartSLOduration=93.112733266 podStartE2EDuration="1m35.7608121s" podCreationTimestamp="2024-07-02 00:23:20 +0000 UTC" firstStartedPulling="2024-07-02 00:24:52.736160914 +0000 UTC m=+112.800781304" lastFinishedPulling="2024-07-02 00:24:55.384239748 +0000 UTC m=+115.448860138" observedRunningTime="2024-07-02 00:24:55.711986832 +0000 UTC m=+115.776607222" watchObservedRunningTime="2024-07-02 00:24:55.7608121 +0000 UTC m=+115.825432490" Jul 2 00:24:59.928361 kubelet[3245]: I0702 00:24:59.928315 3245 topology_manager.go:215] "Topology Admit Handler" podUID="d7cfa68f-534f-4b3c-91d6-40e048631b63" podNamespace="calico-apiserver" podName="calico-apiserver-57f5b486c7-ssg2b" Jul 2 00:24:59.939645 systemd[1]: Created slice kubepods-besteffort-podd7cfa68f_534f_4b3c_91d6_40e048631b63.slice - libcontainer container kubepods-besteffort-podd7cfa68f_534f_4b3c_91d6_40e048631b63.slice. Jul 2 00:24:59.958975 kubelet[3245]: I0702 00:24:59.958935 3245 topology_manager.go:215] "Topology Admit Handler" podUID="e4e3046c-97af-4f22-b372-1d9a8aaa80e2" podNamespace="calico-apiserver" podName="calico-apiserver-57f5b486c7-qhlvn" Jul 2 00:24:59.970019 systemd[1]: Created slice kubepods-besteffort-pode4e3046c_97af_4f22_b372_1d9a8aaa80e2.slice - libcontainer container kubepods-besteffort-pode4e3046c_97af_4f22_b372_1d9a8aaa80e2.slice. 
Jul 2 00:25:00.010399 kubelet[3245]: I0702 00:25:00.010258 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4e3046c-97af-4f22-b372-1d9a8aaa80e2-calico-apiserver-certs\") pod \"calico-apiserver-57f5b486c7-qhlvn\" (UID: \"e4e3046c-97af-4f22-b372-1d9a8aaa80e2\") " pod="calico-apiserver/calico-apiserver-57f5b486c7-qhlvn" Jul 2 00:25:00.010605 kubelet[3245]: I0702 00:25:00.010479 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtf7h\" (UniqueName: \"kubernetes.io/projected/d7cfa68f-534f-4b3c-91d6-40e048631b63-kube-api-access-rtf7h\") pod \"calico-apiserver-57f5b486c7-ssg2b\" (UID: \"d7cfa68f-534f-4b3c-91d6-40e048631b63\") " pod="calico-apiserver/calico-apiserver-57f5b486c7-ssg2b" Jul 2 00:25:00.010605 kubelet[3245]: I0702 00:25:00.010565 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d7cfa68f-534f-4b3c-91d6-40e048631b63-calico-apiserver-certs\") pod \"calico-apiserver-57f5b486c7-ssg2b\" (UID: \"d7cfa68f-534f-4b3c-91d6-40e048631b63\") " pod="calico-apiserver/calico-apiserver-57f5b486c7-ssg2b" Jul 2 00:25:00.010721 kubelet[3245]: I0702 00:25:00.010648 3245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8fd\" (UniqueName: \"kubernetes.io/projected/e4e3046c-97af-4f22-b372-1d9a8aaa80e2-kube-api-access-2b8fd\") pod \"calico-apiserver-57f5b486c7-qhlvn\" (UID: \"e4e3046c-97af-4f22-b372-1d9a8aaa80e2\") " pod="calico-apiserver/calico-apiserver-57f5b486c7-qhlvn" Jul 2 00:25:00.112325 kubelet[3245]: E0702 00:25:00.111962 3245 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:25:00.112325 kubelet[3245]: E0702 00:25:00.112055 3245 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4e3046c-97af-4f22-b372-1d9a8aaa80e2-calico-apiserver-certs podName:e4e3046c-97af-4f22-b372-1d9a8aaa80e2 nodeName:}" failed. No retries permitted until 2024-07-02 00:25:00.612032312 +0000 UTC m=+120.676652702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e4e3046c-97af-4f22-b372-1d9a8aaa80e2-calico-apiserver-certs") pod "calico-apiserver-57f5b486c7-qhlvn" (UID: "e4e3046c-97af-4f22-b372-1d9a8aaa80e2") : secret "calico-apiserver-certs" not found Jul 2 00:25:00.112325 kubelet[3245]: E0702 00:25:00.112113 3245 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:25:00.112325 kubelet[3245]: E0702 00:25:00.112153 3245 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7cfa68f-534f-4b3c-91d6-40e048631b63-calico-apiserver-certs podName:d7cfa68f-534f-4b3c-91d6-40e048631b63 nodeName:}" failed. No retries permitted until 2024-07-02 00:25:00.612140614 +0000 UTC m=+120.676760904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d7cfa68f-534f-4b3c-91d6-40e048631b63-calico-apiserver-certs") pod "calico-apiserver-57f5b486c7-ssg2b" (UID: "d7cfa68f-534f-4b3c-91d6-40e048631b63") : secret "calico-apiserver-certs" not found Jul 2 00:25:00.367186 containerd[1714]: time="2024-07-02T00:25:00.367140449Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.399 [WARNING][5412] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d83f546f-f0c6-4f7a-a190-1895a85550b7", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64", Pod:"csi-node-driver-46k9p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic4ec20bb180", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.400 [INFO][5412] k8s.go 608: Cleaning up netns ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.400 [INFO][5412] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" iface="eth0" netns="" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.400 [INFO][5412] k8s.go 615: Releasing IP address(es) ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.400 [INFO][5412] utils.go 188: Calico CNI releasing IP address ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.425 [INFO][5418] ipam_plugin.go 411: Releasing address using handleID ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.425 [INFO][5418] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.425 [INFO][5418] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.430 [WARNING][5418] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.430 [INFO][5418] ipam_plugin.go 439: Releasing address using workloadID ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.431 [INFO][5418] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.433650 containerd[1714]: 2024-07-02 00:25:00.432 [INFO][5412] k8s.go 621: Teardown processing complete. ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.434302 containerd[1714]: time="2024-07-02T00:25:00.433770972Z" level=info msg="TearDown network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" successfully" Jul 2 00:25:00.434302 containerd[1714]: time="2024-07-02T00:25:00.433804173Z" level=info msg="StopPodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" returns successfully" Jul 2 00:25:00.434554 containerd[1714]: time="2024-07-02T00:25:00.434509783Z" level=info msg="RemovePodSandbox for \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 00:25:00.434627 containerd[1714]: time="2024-07-02T00:25:00.434560083Z" level=info msg="Forcibly stopping sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\"" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.468 [WARNING][5436] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d83f546f-f0c6-4f7a-a190-1895a85550b7", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"23eb603eff4bcef30442d735f6f9065846bfa0307e852eca8b25b8895567de64", Pod:"csi-node-driver-46k9p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic4ec20bb180", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.468 [INFO][5436] k8s.go 608: Cleaning up netns ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.468 [INFO][5436] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" iface="eth0" netns="" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.468 [INFO][5436] k8s.go 615: Releasing IP address(es) ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.468 [INFO][5436] utils.go 188: Calico CNI releasing IP address ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.487 [INFO][5442] ipam_plugin.go 411: Releasing address using handleID ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.487 [INFO][5442] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.487 [INFO][5442] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.492 [WARNING][5442] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.492 [INFO][5442] ipam_plugin.go 439: Releasing address using workloadID ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" HandleID="k8s-pod-network.65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Workload="ci--3975.1.1--a--7b42818af6-k8s-csi--node--driver--46k9p-eth0" Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.493 [INFO][5442] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.495662 containerd[1714]: 2024-07-02 00:25:00.494 [INFO][5436] k8s.go 621: Teardown processing complete. ContainerID="65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c" Jul 2 00:25:00.496475 containerd[1714]: time="2024-07-02T00:25:00.495701131Z" level=info msg="TearDown network for sandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" successfully" Jul 2 00:25:00.513080 containerd[1714]: time="2024-07-02T00:25:00.512517164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:00.513080 containerd[1714]: time="2024-07-02T00:25:00.512606565Z" level=info msg="RemovePodSandbox \"65deb4225fd6796533b67c4e2a9fc4d0223be97245cdd05f05089e2cdc3c9e1c\" returns successfully" Jul 2 00:25:00.514327 containerd[1714]: time="2024-07-02T00:25:00.513825882Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.547 [WARNING][5460] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fdca042f-71be-426a-9e20-51906df4946a", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd", Pod:"coredns-76f75df574-24jmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid76f4af802b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.548 [INFO][5460] k8s.go 608: Cleaning up netns ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.548 [INFO][5460] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" iface="eth0" netns="" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.548 [INFO][5460] k8s.go 615: Releasing IP address(es) ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.548 [INFO][5460] utils.go 188: Calico CNI releasing IP address ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.572 [INFO][5467] ipam_plugin.go 411: Releasing address using handleID ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.572 [INFO][5467] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.572 [INFO][5467] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.577 [WARNING][5467] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.577 [INFO][5467] ipam_plugin.go 439: Releasing address using workloadID ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.579 [INFO][5467] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.581169 containerd[1714]: 2024-07-02 00:25:00.580 [INFO][5460] k8s.go 621: Teardown processing complete. 
ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.582248 containerd[1714]: time="2024-07-02T00:25:00.581213416Z" level=info msg="TearDown network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" successfully" Jul 2 00:25:00.582248 containerd[1714]: time="2024-07-02T00:25:00.581251117Z" level=info msg="StopPodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" returns successfully" Jul 2 00:25:00.582248 containerd[1714]: time="2024-07-02T00:25:00.582042428Z" level=info msg="RemovePodSandbox for \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:25:00.582248 containerd[1714]: time="2024-07-02T00:25:00.582116129Z" level=info msg="Forcibly stopping sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\"" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.617 [WARNING][5485] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fdca042f-71be-426a-9e20-51906df4946a", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"7bf6fcf4b3637b4584cefb16efcc1b07b5c04491f4d0128660caff56c5c22bfd", Pod:"coredns-76f75df574-24jmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid76f4af802b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.618 [INFO][5485] k8s.go 608: 
Cleaning up netns ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.618 [INFO][5485] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" iface="eth0" netns="" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.618 [INFO][5485] k8s.go 615: Releasing IP address(es) ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.618 [INFO][5485] utils.go 188: Calico CNI releasing IP address ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.669 [INFO][5493] ipam_plugin.go 411: Releasing address using handleID ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.669 [INFO][5493] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.669 [INFO][5493] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.684 [WARNING][5493] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.684 [INFO][5493] ipam_plugin.go 439: Releasing address using workloadID ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" HandleID="k8s-pod-network.52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--24jmp-eth0" Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.686 [INFO][5493] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.691751 containerd[1714]: 2024-07-02 00:25:00.689 [INFO][5485] k8s.go 621: Teardown processing complete. ContainerID="52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574" Jul 2 00:25:00.692794 containerd[1714]: time="2024-07-02T00:25:00.691660547Z" level=info msg="TearDown network for sandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" successfully" Jul 2 00:25:00.706359 containerd[1714]: time="2024-07-02T00:25:00.706308950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:00.706554 containerd[1714]: time="2024-07-02T00:25:00.706386851Z" level=info msg="RemovePodSandbox \"52e7aa36aadf9df8444315eb316c71b36df29e6726a82263a4735c386dc5a574\" returns successfully" Jul 2 00:25:00.706949 containerd[1714]: time="2024-07-02T00:25:00.706909259Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.737 [WARNING][5511] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0", GenerateName:"calico-kube-controllers-7967f9f766-", Namespace:"calico-system", SelfLink:"", UID:"08a53d11-75a7-4cca-8c69-c90223c69562", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7967f9f766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32", Pod:"calico-kube-controllers-7967f9f766-kd24n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6394870b44e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.737 [INFO][5511] k8s.go 608: Cleaning up netns ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.737 [INFO][5511] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" iface="eth0" netns="" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.737 [INFO][5511] k8s.go 615: Releasing IP address(es) ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.737 [INFO][5511] utils.go 188: Calico CNI releasing IP address ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.759 [INFO][5517] ipam_plugin.go 411: Releasing address using handleID ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.759 [INFO][5517] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.759 [INFO][5517] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.763 [WARNING][5517] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.764 [INFO][5517] ipam_plugin.go 439: Releasing address using workloadID ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.765 [INFO][5517] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.767057 containerd[1714]: 2024-07-02 00:25:00.766 [INFO][5511] k8s.go 621: Teardown processing complete. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.768032 containerd[1714]: time="2024-07-02T00:25:00.767079693Z" level=info msg="TearDown network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" successfully" Jul 2 00:25:00.768032 containerd[1714]: time="2024-07-02T00:25:00.767111493Z" level=info msg="StopPodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" returns successfully" Jul 2 00:25:00.768032 containerd[1714]: time="2024-07-02T00:25:00.767825703Z" level=info msg="RemovePodSandbox for \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:25:00.768032 containerd[1714]: time="2024-07-02T00:25:00.767861404Z" level=info msg="Forcibly stopping sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\"" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.802 [WARNING][5535] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0", GenerateName:"calico-kube-controllers-7967f9f766-", Namespace:"calico-system", SelfLink:"", UID:"08a53d11-75a7-4cca-8c69-c90223c69562", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7967f9f766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"e63cc9da774e1419ab3822862ea1fa13fa3460cf4aaefc56d8b6e08609043f32", Pod:"calico-kube-controllers-7967f9f766-kd24n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6394870b44e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.802 [INFO][5535] k8s.go 608: Cleaning up netns ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.802 [INFO][5535] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" iface="eth0" netns="" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.802 [INFO][5535] k8s.go 615: Releasing IP address(es) ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.803 [INFO][5535] utils.go 188: Calico CNI releasing IP address ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.823 [INFO][5541] ipam_plugin.go 411: Releasing address using handleID ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.823 [INFO][5541] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.823 [INFO][5541] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.828 [WARNING][5541] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.828 [INFO][5541] ipam_plugin.go 439: Releasing address using workloadID ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" HandleID="k8s-pod-network.834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--kube--controllers--7967f9f766--kd24n-eth0" Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.830 [INFO][5541] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.833045 containerd[1714]: 2024-07-02 00:25:00.831 [INFO][5535] k8s.go 621: Teardown processing complete. ContainerID="834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee" Jul 2 00:25:00.833709 containerd[1714]: time="2024-07-02T00:25:00.833082308Z" level=info msg="TearDown network for sandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" successfully" Jul 2 00:25:00.840616 containerd[1714]: time="2024-07-02T00:25:00.840573912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:00.840738 containerd[1714]: time="2024-07-02T00:25:00.840648113Z" level=info msg="RemovePodSandbox \"834133a3cc0e8ae409b54095ec4fe7a5d9a60e4a73d66dabc551a704cb51e5ee\" returns successfully" Jul 2 00:25:00.841308 containerd[1714]: time="2024-07-02T00:25:00.841209521Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:25:00.846086 containerd[1714]: time="2024-07-02T00:25:00.846029687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f5b486c7-ssg2b,Uid:d7cfa68f-534f-4b3c-91d6-40e048631b63,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:25:00.881574 containerd[1714]: time="2024-07-02T00:25:00.880641667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f5b486c7-qhlvn,Uid:e4e3046c-97af-4f22-b372-1d9a8aaa80e2,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.888 [WARNING][5559] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"af167feb-cfad-401b-a59b-ba2e1b1a7b28", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec", Pod:"coredns-76f75df574-zcrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4b2c7274dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.888 [INFO][5559] k8s.go 608: 
Cleaning up netns ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.888 [INFO][5559] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" iface="eth0" netns="" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.888 [INFO][5559] k8s.go 615: Releasing IP address(es) ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.889 [INFO][5559] utils.go 188: Calico CNI releasing IP address ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.923 [INFO][5566] ipam_plugin.go 411: Releasing address using handleID ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.923 [INFO][5566] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.923 [INFO][5566] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.931 [WARNING][5566] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.931 [INFO][5566] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.934 [INFO][5566] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.938980 containerd[1714]: 2024-07-02 00:25:00.936 [INFO][5559] k8s.go 621: Teardown processing complete. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:00.942528 containerd[1714]: time="2024-07-02T00:25:00.942413923Z" level=info msg="TearDown network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" successfully" Jul 2 00:25:00.943648 containerd[1714]: time="2024-07-02T00:25:00.942654527Z" level=info msg="StopPodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" returns successfully" Jul 2 00:25:00.944558 containerd[1714]: time="2024-07-02T00:25:00.943959145Z" level=info msg="RemovePodSandbox for \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:25:00.944558 containerd[1714]: time="2024-07-02T00:25:00.943996045Z" level=info msg="Forcibly stopping sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\"" Jul 2 00:25:01.111132 systemd-networkd[1354]: cali9f51a13add4: Link UP Jul 2 00:25:01.113991 systemd-networkd[1354]: cali9f51a13add4: Gained carrier Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.021 [WARNING][5607] k8s.go 572: CNI_CONTAINERID 
does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"af167feb-cfad-401b-a59b-ba2e1b1a7b28", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"1a82ad727cf2a79004cc6f24bb515a4399a0e48bd8877859c7b44d8e140f40ec", Pod:"coredns-76f75df574-zcrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4b2c7274dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.143400 
containerd[1714]: 2024-07-02 00:25:01.022 [INFO][5607] k8s.go 608: Cleaning up netns ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.022 [INFO][5607] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" iface="eth0" netns="" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.022 [INFO][5607] k8s.go 615: Releasing IP address(es) ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.022 [INFO][5607] utils.go 188: Calico CNI releasing IP address ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.073 [INFO][5621] ipam_plugin.go 411: Releasing address using handleID ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.073 [INFO][5621] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.100 [INFO][5621] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.115 [WARNING][5621] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.115 [INFO][5621] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" HandleID="k8s-pod-network.0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Workload="ci--3975.1.1--a--7b42818af6-k8s-coredns--76f75df574--zcrs9-eth0" Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.117 [INFO][5621] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:01.143400 containerd[1714]: 2024-07-02 00:25:01.133 [INFO][5607] k8s.go 621: Teardown processing complete. ContainerID="0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d" Jul 2 00:25:01.145249 containerd[1714]: time="2024-07-02T00:25:01.143355009Z" level=info msg="TearDown network for sandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" successfully" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:00.961 [INFO][5570] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0 calico-apiserver-57f5b486c7- calico-apiserver d7cfa68f-534f-4b3c-91d6-40e048631b63 947 0 2024-07-02 00:24:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f5b486c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-7b42818af6 calico-apiserver-57f5b486c7-ssg2b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f51a13add4 [] []}} 
ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:00.962 [INFO][5570] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.033 [INFO][5612] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" HandleID="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.048 [INFO][5612] ipam_plugin.go 264: Auto assigning IP ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" HandleID="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-7b42818af6", "pod":"calico-apiserver-57f5b486c7-ssg2b", "timestamp":"2024-07-02 00:25:01.033174782 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7b42818af6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.048 [INFO][5612] ipam_plugin.go 352: About to acquire 
host-wide IPAM lock. Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.048 [INFO][5612] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.048 [INFO][5612] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7b42818af6' Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.053 [INFO][5612] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.067 [INFO][5612] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.078 [INFO][5612] ipam.go 489: Trying affinity for 192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.081 [INFO][5612] ipam.go 155: Attempting to load block cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.084 [INFO][5612] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.084 [INFO][5612] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.64/26 handle="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.086 [INFO][5612] ipam.go 1685: Creating new handle: k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4 Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.092 [INFO][5612] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.64/26 handle="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" host="ci-3975.1.1-a-7b42818af6" Jul 2 
00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.099 [INFO][5612] ipam.go 1216: Successfully claimed IPs: [192.168.30.69/26] block=192.168.30.64/26 handle="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.099 [INFO][5612] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.69/26] handle="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.099 [INFO][5612] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:01.146723 containerd[1714]: 2024-07-02 00:25:01.099 [INFO][5612] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.30.69/26] IPv6=[] ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" HandleID="k8s-pod-network.65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.147567 containerd[1714]: 2024-07-02 00:25:01.103 [INFO][5570] k8s.go 386: Populated endpoint ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0", GenerateName:"calico-apiserver-57f5b486c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7cfa68f-534f-4b3c-91d6-40e048631b63", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f5b486c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"", Pod:"calico-apiserver-57f5b486c7-ssg2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f51a13add4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.147567 containerd[1714]: 2024-07-02 00:25:01.104 [INFO][5570] k8s.go 387: Calico CNI using IPs: [192.168.30.69/32] ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.147567 containerd[1714]: 2024-07-02 00:25:01.104 [INFO][5570] dataplane_linux.go 68: Setting the host side veth name to cali9f51a13add4 ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.147567 containerd[1714]: 2024-07-02 00:25:01.112 [INFO][5570] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" 
WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.147567 containerd[1714]: 2024-07-02 00:25:01.113 [INFO][5570] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0", GenerateName:"calico-apiserver-57f5b486c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7cfa68f-534f-4b3c-91d6-40e048631b63", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f5b486c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4", Pod:"calico-apiserver-57f5b486c7-ssg2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f51a13add4", MAC:"76:4e:a7:2b:de:01", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.147567 containerd[1714]: 2024-07-02 00:25:01.135 [INFO][5570] k8s.go 500: Wrote updated endpoint to datastore ContainerID="65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-ssg2b" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--ssg2b-eth0" Jul 2 00:25:01.161235 containerd[1714]: time="2024-07-02T00:25:01.160504847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:25:01.161235 containerd[1714]: time="2024-07-02T00:25:01.160585248Z" level=info msg="RemovePodSandbox \"0a68ade952c3f5228fc88c6927163a1147fbb9097b54b97a3b7064114d49b40d\" returns successfully" Jul 2 00:25:01.201031 systemd-networkd[1354]: calida96cf3fc46: Link UP Jul 2 00:25:01.204542 systemd-networkd[1354]: calida96cf3fc46: Gained carrier Jul 2 00:25:01.215062 containerd[1714]: time="2024-07-02T00:25:01.214652497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:01.215062 containerd[1714]: time="2024-07-02T00:25:01.214717598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.215062 containerd[1714]: time="2024-07-02T00:25:01.214751499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:01.215062 containerd[1714]: time="2024-07-02T00:25:01.214770199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.252982 systemd[1]: run-containerd-runc-k8s.io-65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4-runc.OX3fco.mount: Deactivated successfully. Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.024 [INFO][5584] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0 calico-apiserver-57f5b486c7- calico-apiserver e4e3046c-97af-4f22-b372-1d9a8aaa80e2 955 0 2024-07-02 00:24:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f5b486c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-7b42818af6 calico-apiserver-57f5b486c7-qhlvn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida96cf3fc46 [] []}} ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.024 [INFO][5584] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.091 [INFO][5627] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" HandleID="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" 
Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.106 [INFO][5627] ipam_plugin.go 264: Auto assigning IP ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" HandleID="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-7b42818af6", "pod":"calico-apiserver-57f5b486c7-qhlvn", "timestamp":"2024-07-02 00:25:01.091239587 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7b42818af6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.106 [INFO][5627] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.119 [INFO][5627] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.121 [INFO][5627] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7b42818af6' Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.143 [INFO][5627] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.154 [INFO][5627] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.167 [INFO][5627] ipam.go 489: Trying affinity for 192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.170 [INFO][5627] ipam.go 155: Attempting to load block cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.174 [INFO][5627] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.64/26 host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.174 [INFO][5627] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.64/26 handle="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.176 [INFO][5627] ipam.go 1685: Creating new handle: k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701 Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.183 [INFO][5627] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.64/26 handle="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.193 [INFO][5627] ipam.go 1216: Successfully claimed IPs: [192.168.30.70/26] block=192.168.30.64/26 
handle="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.193 [INFO][5627] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.70/26] handle="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" host="ci-3975.1.1-a-7b42818af6" Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.193 [INFO][5627] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:01.255685 containerd[1714]: 2024-07-02 00:25:01.194 [INFO][5627] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.30.70/26] IPv6=[] ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" HandleID="k8s-pod-network.8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Workload="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.256918 containerd[1714]: 2024-07-02 00:25:01.196 [INFO][5584] k8s.go 386: Populated endpoint ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0", GenerateName:"calico-apiserver-57f5b486c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4e3046c-97af-4f22-b372-1d9a8aaa80e2", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f5b486c7", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"", Pod:"calico-apiserver-57f5b486c7-qhlvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida96cf3fc46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.256918 containerd[1714]: 2024-07-02 00:25:01.197 [INFO][5584] k8s.go 387: Calico CNI using IPs: [192.168.30.70/32] ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.256918 containerd[1714]: 2024-07-02 00:25:01.197 [INFO][5584] dataplane_linux.go 68: Setting the host side veth name to calida96cf3fc46 ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.256918 containerd[1714]: 2024-07-02 00:25:01.204 [INFO][5584] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.256918 containerd[1714]: 2024-07-02 
00:25:01.208 [INFO][5584] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0", GenerateName:"calico-apiserver-57f5b486c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4e3046c-97af-4f22-b372-1d9a8aaa80e2", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f5b486c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7b42818af6", ContainerID:"8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701", Pod:"calico-apiserver-57f5b486c7-qhlvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida96cf3fc46", MAC:"22:cf:09:25:f0:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.256918 containerd[1714]: 2024-07-02 00:25:01.240 [INFO][5584] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701" Namespace="calico-apiserver" Pod="calico-apiserver-57f5b486c7-qhlvn" WorkloadEndpoint="ci--3975.1.1--a--7b42818af6-k8s-calico--apiserver--57f5b486c7--qhlvn-eth0" Jul 2 00:25:01.267630 systemd[1]: Started cri-containerd-65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4.scope - libcontainer container 65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4. Jul 2 00:25:01.310529 containerd[1714]: time="2024-07-02T00:25:01.310310224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:01.310529 containerd[1714]: time="2024-07-02T00:25:01.310393525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.312372 containerd[1714]: time="2024-07-02T00:25:01.312270951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:01.312372 containerd[1714]: time="2024-07-02T00:25:01.312328052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.352714 systemd[1]: Started cri-containerd-8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701.scope - libcontainer container 8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701. 
Jul 2 00:25:01.368535 containerd[1714]: time="2024-07-02T00:25:01.368035724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f5b486c7-ssg2b,Uid:d7cfa68f-534f-4b3c-91d6-40e048631b63,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4\"" Jul 2 00:25:01.371521 containerd[1714]: time="2024-07-02T00:25:01.370363456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:25:01.410193 containerd[1714]: time="2024-07-02T00:25:01.410138807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f5b486c7-qhlvn,Uid:e4e3046c-97af-4f22-b372-1d9a8aaa80e2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701\"" Jul 2 00:25:02.504552 update_engine[1693]: I0702 00:25:02.504488 1693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:25:02.505016 update_engine[1693]: I0702 00:25:02.504741 1693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:25:02.505016 update_engine[1693]: I0702 00:25:02.504994 1693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:25:02.534355 update_engine[1693]: E0702 00:25:02.534317 1693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:25:02.534507 update_engine[1693]: I0702 00:25:02.534389 1693 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 00:25:02.740653 systemd-networkd[1354]: cali9f51a13add4: Gained IPv6LL Jul 2 00:25:02.804647 systemd-networkd[1354]: calida96cf3fc46: Gained IPv6LL Jul 2 00:25:04.008142 containerd[1714]: time="2024-07-02T00:25:04.008091022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:04.011670 containerd[1714]: time="2024-07-02T00:25:04.011597171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 00:25:04.015562 containerd[1714]: time="2024-07-02T00:25:04.015427724Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:04.020576 containerd[1714]: time="2024-07-02T00:25:04.020506195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:04.021336 containerd[1714]: time="2024-07-02T00:25:04.021201704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.650801748s" Jul 2 00:25:04.021336 containerd[1714]: time="2024-07-02T00:25:04.021239505Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:25:04.022745 containerd[1714]: time="2024-07-02T00:25:04.022571623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:25:04.023881 containerd[1714]: time="2024-07-02T00:25:04.023815540Z" level=info msg="CreateContainer within sandbox \"65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:25:04.056526 containerd[1714]: time="2024-07-02T00:25:04.056493593Z" level=info msg="CreateContainer within sandbox \"65ac942ac6574eaa25d839c3b748348c0001149715aa068c3e2f41610ad005b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96beb4b682fc744cb0531c188fd7412274e1c6878351de869950b108709083d2\"" Jul 2 00:25:04.057029 containerd[1714]: time="2024-07-02T00:25:04.056922299Z" level=info msg="StartContainer for \"96beb4b682fc744cb0531c188fd7412274e1c6878351de869950b108709083d2\"" Jul 2 00:25:04.093330 systemd[1]: run-containerd-runc-k8s.io-96beb4b682fc744cb0531c188fd7412274e1c6878351de869950b108709083d2-runc.XaoXsR.mount: Deactivated successfully. Jul 2 00:25:04.102589 systemd[1]: Started cri-containerd-96beb4b682fc744cb0531c188fd7412274e1c6878351de869950b108709083d2.scope - libcontainer container 96beb4b682fc744cb0531c188fd7412274e1c6878351de869950b108709083d2. 
Jul 2 00:25:04.147023 containerd[1714]: time="2024-07-02T00:25:04.146909747Z" level=info msg="StartContainer for \"96beb4b682fc744cb0531c188fd7412274e1c6878351de869950b108709083d2\" returns successfully" Jul 2 00:25:04.736069 kubelet[3245]: I0702 00:25:04.735648 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57f5b486c7-ssg2b" podStartSLOduration=3.083758646 podStartE2EDuration="5.735580008s" podCreationTimestamp="2024-07-02 00:24:59 +0000 UTC" firstStartedPulling="2024-07-02 00:25:01.369993651 +0000 UTC m=+121.434613941" lastFinishedPulling="2024-07-02 00:25:04.021815013 +0000 UTC m=+124.086435303" observedRunningTime="2024-07-02 00:25:04.733051572 +0000 UTC m=+124.797671962" watchObservedRunningTime="2024-07-02 00:25:04.735580008 +0000 UTC m=+124.800200598" Jul 2 00:25:04.746514 containerd[1714]: time="2024-07-02T00:25:04.746472659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:04.749894 containerd[1714]: time="2024-07-02T00:25:04.748728690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jul 2 00:25:04.754081 containerd[1714]: time="2024-07-02T00:25:04.754047964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 731.437739ms" Jul 2 00:25:04.754224 containerd[1714]: time="2024-07-02T00:25:04.754205066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:25:04.757461 containerd[1714]: 
time="2024-07-02T00:25:04.756990804Z" level=info msg="CreateContainer within sandbox \"8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:25:04.792130 containerd[1714]: time="2024-07-02T00:25:04.792094191Z" level=info msg="CreateContainer within sandbox \"8078a66f4e1ea9e8b64b376374a7616271040e9aac8cbecef3b5fe1e0ae15701\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"597df271edf63b58a0ace350c4e9c1d3e1082b44903658360fd458d6ee301edf\"" Jul 2 00:25:04.793767 containerd[1714]: time="2024-07-02T00:25:04.792941303Z" level=info msg="StartContainer for \"597df271edf63b58a0ace350c4e9c1d3e1082b44903658360fd458d6ee301edf\"" Jul 2 00:25:04.830745 systemd[1]: Started cri-containerd-597df271edf63b58a0ace350c4e9c1d3e1082b44903658360fd458d6ee301edf.scope - libcontainer container 597df271edf63b58a0ace350c4e9c1d3e1082b44903658360fd458d6ee301edf. Jul 2 00:25:04.899635 containerd[1714]: time="2024-07-02T00:25:04.899595781Z" level=info msg="StartContainer for \"597df271edf63b58a0ace350c4e9c1d3e1082b44903658360fd458d6ee301edf\" returns successfully" Jul 2 00:25:05.734765 kubelet[3245]: I0702 00:25:05.734726 3245 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57f5b486c7-qhlvn" podStartSLOduration=3.391671914 podStartE2EDuration="6.734679458s" podCreationTimestamp="2024-07-02 00:24:59 +0000 UTC" firstStartedPulling="2024-07-02 00:25:01.411508826 +0000 UTC m=+121.476129216" lastFinishedPulling="2024-07-02 00:25:04.75451647 +0000 UTC m=+124.819136760" observedRunningTime="2024-07-02 00:25:05.734425254 +0000 UTC m=+125.799045544" watchObservedRunningTime="2024-07-02 00:25:05.734679458 +0000 UTC m=+125.799299848" Jul 2 00:25:12.511324 update_engine[1693]: I0702 00:25:12.511263 1693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:25:12.511966 update_engine[1693]: I0702 00:25:12.511552 1693 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:25:12.511966 update_engine[1693]: I0702 00:25:12.511870 1693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:25:12.533087 update_engine[1693]: E0702 00:25:12.533052 1693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:25:12.533226 update_engine[1693]: I0702 00:25:12.533111 1693 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:25:12.533226 update_engine[1693]: I0702 00:25:12.533119 1693 omaha_request_action.cc:617] Omaha request response: Jul 2 00:25:12.533226 update_engine[1693]: E0702 00:25:12.533202 1693 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 00:25:12.533226 update_engine[1693]: I0702 00:25:12.533226 1693 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533231 1693 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533234 1693 update_attempter.cc:306] Processing Done. Jul 2 00:25:12.533396 update_engine[1693]: E0702 00:25:12.533250 1693 update_attempter.cc:619] Update failed. Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533254 1693 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533258 1693 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533264 1693 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533347 1693 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533370 1693 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533375 1693 omaha_request_action.cc:272] Request: Jul 2 00:25:12.533396 update_engine[1693]: Jul 2 00:25:12.533396 update_engine[1693]: Jul 2 00:25:12.533396 update_engine[1693]: Jul 2 00:25:12.533396 update_engine[1693]: Jul 2 00:25:12.533396 update_engine[1693]: Jul 2 00:25:12.533396 update_engine[1693]: Jul 2 00:25:12.533396 update_engine[1693]: I0702 00:25:12.533381 1693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:25:12.534185 update_engine[1693]: I0702 00:25:12.533560 1693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:25:12.534185 update_engine[1693]: I0702 00:25:12.533761 1693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:25:12.534238 locksmithd[1749]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 00:25:12.562880 update_engine[1693]: E0702 00:25:12.562832 1693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562907 1693 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562916 1693 omaha_request_action.cc:617] Omaha request response: Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562923 1693 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562928 1693 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562933 1693 update_attempter.cc:306] Processing Done. Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562940 1693 update_attempter.cc:310] Error event sent. Jul 2 00:25:12.563031 update_engine[1693]: I0702 00:25:12.562952 1693 update_check_scheduler.cc:74] Next update check in 41m3s Jul 2 00:25:12.563469 locksmithd[1749]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 00:25:33.281584 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:51414.service - OpenSSH per-connection server daemon (10.200.16.10:51414). Jul 2 00:25:33.966319 sshd[5913]: Accepted publickey for core from 10.200.16.10 port 51414 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:33.967987 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:33.972664 systemd-logind[1692]: New session 10 of user core. Jul 2 00:25:33.979617 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 00:25:34.636645 sshd[5913]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:34.640397 systemd-logind[1692]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:25:34.640699 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:51414.service: Deactivated successfully. Jul 2 00:25:34.643090 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:25:34.645354 systemd-logind[1692]: Removed session 10. Jul 2 00:25:39.748533 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:34694.service - OpenSSH per-connection server daemon (10.200.16.10:34694). Jul 2 00:25:40.429066 sshd[5932]: Accepted publickey for core from 10.200.16.10 port 34694 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:40.430758 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:40.436598 systemd-logind[1692]: New session 11 of user core. Jul 2 00:25:40.440610 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:25:40.956791 sshd[5932]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:40.959822 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:34694.service: Deactivated successfully. Jul 2 00:25:40.961946 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:25:40.963420 systemd-logind[1692]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:25:40.964528 systemd-logind[1692]: Removed session 11. Jul 2 00:25:46.072555 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:34708.service - OpenSSH per-connection server daemon (10.200.16.10:34708). Jul 2 00:25:46.717510 sshd[5955]: Accepted publickey for core from 10.200.16.10 port 34708 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:46.719640 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:46.724152 systemd-logind[1692]: New session 12 of user core. 
Jul 2 00:25:46.729609 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:25:47.240625 sshd[5955]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:47.245205 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:34708.service: Deactivated successfully. Jul 2 00:25:47.247988 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:25:47.248922 systemd-logind[1692]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:25:47.249844 systemd-logind[1692]: Removed session 12. Jul 2 00:25:52.354638 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:58044.service - OpenSSH per-connection server daemon (10.200.16.10:58044). Jul 2 00:25:53.020539 sshd[5988]: Accepted publickey for core from 10.200.16.10 port 58044 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:53.022050 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:53.026730 systemd-logind[1692]: New session 13 of user core. Jul 2 00:25:53.031593 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:25:53.537777 sshd[5988]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:53.542047 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:58044.service: Deactivated successfully. Jul 2 00:25:53.544040 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:25:53.544947 systemd-logind[1692]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:25:53.545900 systemd-logind[1692]: Removed session 13. Jul 2 00:25:53.651776 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:58050.service - OpenSSH per-connection server daemon (10.200.16.10:58050). 
Jul 2 00:25:54.306781 sshd[6002]: Accepted publickey for core from 10.200.16.10 port 58050 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:54.307373 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:54.312098 systemd-logind[1692]: New session 14 of user core. Jul 2 00:25:54.318585 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:25:54.847117 sshd[6002]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:54.850601 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:58050.service: Deactivated successfully. Jul 2 00:25:54.852989 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:25:54.854922 systemd-logind[1692]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:25:54.856267 systemd-logind[1692]: Removed session 14. Jul 2 00:25:54.965741 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:58056.service - OpenSSH per-connection server daemon (10.200.16.10:58056). Jul 2 00:25:55.601406 sshd[6013]: Accepted publickey for core from 10.200.16.10 port 58056 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:55.603484 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:55.609386 systemd-logind[1692]: New session 15 of user core. Jul 2 00:25:55.613627 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:25:56.111106 sshd[6013]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:56.114562 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:58056.service: Deactivated successfully. Jul 2 00:25:56.117063 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:25:56.118696 systemd-logind[1692]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:25:56.119762 systemd-logind[1692]: Removed session 15. 
Jul 2 00:26:01.228738 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:44586.service - OpenSSH per-connection server daemon (10.200.16.10:44586). Jul 2 00:26:01.865422 sshd[6076]: Accepted publickey for core from 10.200.16.10 port 44586 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:01.866892 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:01.871525 systemd-logind[1692]: New session 16 of user core. Jul 2 00:26:01.875579 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:26:02.377863 sshd[6076]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:02.382400 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:44586.service: Deactivated successfully. Jul 2 00:26:02.384841 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:26:02.385583 systemd-logind[1692]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:26:02.386945 systemd-logind[1692]: Removed session 16. Jul 2 00:26:07.496791 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:44602.service - OpenSSH per-connection server daemon (10.200.16.10:44602). Jul 2 00:26:08.171762 sshd[6100]: Accepted publickey for core from 10.200.16.10 port 44602 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:08.173816 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:08.178355 systemd-logind[1692]: New session 17 of user core. Jul 2 00:26:08.183604 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:26:08.690904 sshd[6100]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:08.693935 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:44602.service: Deactivated successfully. Jul 2 00:26:08.696251 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:26:08.697782 systemd-logind[1692]: Session 17 logged out. Waiting for processes to exit. 
Jul 2 00:26:08.699187 systemd-logind[1692]: Removed session 17. Jul 2 00:26:13.806648 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:33254.service - OpenSSH per-connection server daemon (10.200.16.10:33254). Jul 2 00:26:14.459687 sshd[6113]: Accepted publickey for core from 10.200.16.10 port 33254 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:14.461124 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:14.465809 systemd-logind[1692]: New session 18 of user core. Jul 2 00:26:14.471596 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:26:14.975179 sshd[6113]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:14.980261 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:33254.service: Deactivated successfully. Jul 2 00:26:14.983165 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:26:14.983970 systemd-logind[1692]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:26:14.984964 systemd-logind[1692]: Removed session 18. Jul 2 00:26:20.091500 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:40954.service - OpenSSH per-connection server daemon (10.200.16.10:40954). Jul 2 00:26:20.742359 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 40954 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:20.744272 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:20.750582 systemd-logind[1692]: New session 19 of user core. Jul 2 00:26:20.754575 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:26:21.259638 sshd[6146]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:21.263204 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:40954.service: Deactivated successfully. Jul 2 00:26:21.265695 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:26:21.267402 systemd-logind[1692]: Session 19 logged out. 
Waiting for processes to exit. Jul 2 00:26:21.268534 systemd-logind[1692]: Removed session 19. Jul 2 00:26:21.374581 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:40962.service - OpenSSH per-connection server daemon (10.200.16.10:40962). Jul 2 00:26:22.024506 sshd[6161]: Accepted publickey for core from 10.200.16.10 port 40962 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:22.025943 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:22.030538 systemd-logind[1692]: New session 20 of user core. Jul 2 00:26:22.033630 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:26:22.586319 sshd[6161]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:22.590194 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:40962.service: Deactivated successfully. Jul 2 00:26:22.592304 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:26:22.593264 systemd-logind[1692]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:26:22.594241 systemd-logind[1692]: Removed session 20. Jul 2 00:26:22.702387 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:40964.service - OpenSSH per-connection server daemon (10.200.16.10:40964). Jul 2 00:26:23.355155 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 40964 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:23.356640 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:23.361309 systemd-logind[1692]: New session 21 of user core. Jul 2 00:26:23.365593 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:26:25.587078 systemd[1]: run-containerd-runc-k8s.io-fc59ab1729315835bae293ddbcd87b28c1bb18114e722a33bc837c12f5b4bb84-runc.HIithl.mount: Deactivated successfully. 
Jul 2 00:26:26.120701 sshd[6172]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:26.125220 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:40964.service: Deactivated successfully. Jul 2 00:26:26.127094 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:26:26.127921 systemd-logind[1692]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:26:26.129042 systemd-logind[1692]: Removed session 21. Jul 2 00:26:26.237723 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:40966.service - OpenSSH per-connection server daemon (10.200.16.10:40966). Jul 2 00:26:26.878664 sshd[6212]: Accepted publickey for core from 10.200.16.10 port 40966 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:26.880158 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:26.885998 systemd-logind[1692]: New session 22 of user core. Jul 2 00:26:26.890631 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:26:27.496772 sshd[6212]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:27.499821 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:40966.service: Deactivated successfully. Jul 2 00:26:27.502168 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:26:27.503906 systemd-logind[1692]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:26:27.504999 systemd-logind[1692]: Removed session 22. Jul 2 00:26:27.616724 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:40972.service - OpenSSH per-connection server daemon (10.200.16.10:40972). Jul 2 00:26:28.260744 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 40972 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:28.262329 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:28.266997 systemd-logind[1692]: New session 23 of user core. 
Jul 2 00:26:28.272596 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:26:28.787717 sshd[6251]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:28.791713 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:40972.service: Deactivated successfully. Jul 2 00:26:28.793841 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:26:28.794791 systemd-logind[1692]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:26:28.795883 systemd-logind[1692]: Removed session 23. Jul 2 00:26:33.914761 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:56140.service - OpenSSH per-connection server daemon (10.200.16.10:56140). Jul 2 00:26:34.565909 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 56140 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:34.567374 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:34.571490 systemd-logind[1692]: New session 24 of user core. Jul 2 00:26:34.577594 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:26:35.112570 sshd[6266]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:35.117221 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:56140.service: Deactivated successfully. Jul 2 00:26:35.120039 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:26:35.120991 systemd-logind[1692]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:26:35.122267 systemd-logind[1692]: Removed session 24. Jul 2 00:26:40.226578 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:39216.service - OpenSSH per-connection server daemon (10.200.16.10:39216). 
Jul 2 00:26:40.869949 sshd[6285]: Accepted publickey for core from 10.200.16.10 port 39216 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:40.871693 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:40.876908 systemd-logind[1692]: New session 25 of user core. Jul 2 00:26:40.882961 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:26:41.382962 sshd[6285]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:41.387531 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:39216.service: Deactivated successfully. Jul 2 00:26:41.390216 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:26:41.391543 systemd-logind[1692]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:26:41.392765 systemd-logind[1692]: Removed session 25. Jul 2 00:26:46.499689 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:39220.service - OpenSSH per-connection server daemon (10.200.16.10:39220). Jul 2 00:26:47.155466 sshd[6303]: Accepted publickey for core from 10.200.16.10 port 39220 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:47.157001 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:47.161649 systemd-logind[1692]: New session 26 of user core. Jul 2 00:26:47.165614 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:26:47.670977 sshd[6303]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:47.675470 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:39220.service: Deactivated successfully. Jul 2 00:26:47.678149 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:26:47.678985 systemd-logind[1692]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:26:47.680007 systemd-logind[1692]: Removed session 26. 
Jul 2 00:26:52.781755 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:47098.service - OpenSSH per-connection server daemon (10.200.16.10:47098). Jul 2 00:26:53.426170 sshd[6339]: Accepted publickey for core from 10.200.16.10 port 47098 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:53.427794 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:53.432446 systemd-logind[1692]: New session 27 of user core. Jul 2 00:26:53.438576 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:26:53.937958 sshd[6339]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:53.941187 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:47098.service: Deactivated successfully. Jul 2 00:26:53.943711 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:26:53.945350 systemd-logind[1692]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:26:53.946630 systemd-logind[1692]: Removed session 27. Jul 2 00:26:59.058739 systemd[1]: Started sshd@25-10.200.8.39:22-10.200.16.10:54002.service - OpenSSH per-connection server daemon (10.200.16.10:54002). Jul 2 00:26:59.700144 sshd[6396]: Accepted publickey for core from 10.200.16.10 port 54002 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:59.701680 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:59.706566 systemd-logind[1692]: New session 28 of user core. Jul 2 00:26:59.713597 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:27:00.214729 sshd[6396]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:00.219088 systemd[1]: sshd@25-10.200.8.39:22-10.200.16.10:54002.service: Deactivated successfully. Jul 2 00:27:00.221231 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:27:00.222213 systemd-logind[1692]: Session 28 logged out. Waiting for processes to exit. 
Jul 2 00:27:00.223275 systemd-logind[1692]: Removed session 28. Jul 2 00:27:05.329571 systemd[1]: Started sshd@26-10.200.8.39:22-10.200.16.10:54006.service - OpenSSH per-connection server daemon (10.200.16.10:54006). Jul 2 00:27:05.969920 sshd[6410]: Accepted publickey for core from 10.200.16.10 port 54006 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:27:05.971473 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:05.976518 systemd-logind[1692]: New session 29 of user core. Jul 2 00:27:05.987591 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:27:06.481042 sshd[6410]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:06.484509 systemd[1]: sshd@26-10.200.8.39:22-10.200.16.10:54006.service: Deactivated successfully. Jul 2 00:27:06.487297 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:27:06.489055 systemd-logind[1692]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:27:06.490081 systemd-logind[1692]: Removed session 29. Jul 2 00:27:21.556701 systemd[1]: cri-containerd-fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e.scope: Deactivated successfully. Jul 2 00:27:21.557011 systemd[1]: cri-containerd-fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e.scope: Consumed 3.206s CPU time, 22.0M memory peak, 0B memory swap peak. 
Jul 2 00:27:21.582378 containerd[1714]: time="2024-07-02T00:27:21.580790572Z" level=info msg="shim disconnected" id=fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e namespace=k8s.io Jul 2 00:27:21.582378 containerd[1714]: time="2024-07-02T00:27:21.580869173Z" level=warning msg="cleaning up after shim disconnected" id=fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e namespace=k8s.io Jul 2 00:27:21.582378 containerd[1714]: time="2024-07-02T00:27:21.580882173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:27:21.588000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e-rootfs.mount: Deactivated successfully. Jul 2 00:27:22.028201 kubelet[3245]: I0702 00:27:22.028172 3245 scope.go:117] "RemoveContainer" containerID="fe2d04f3f3fadedf7e74c2c057908c4deb9e2b5544ddf9aeaf246ea942863e0e" Jul 2 00:27:22.032838 containerd[1714]: time="2024-07-02T00:27:22.032795768Z" level=info msg="CreateContainer within sandbox \"d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 00:27:22.068376 containerd[1714]: time="2024-07-02T00:27:22.068324855Z" level=info msg="CreateContainer within sandbox \"d3ea784c90bf3cee6f71e4d630c8f58c74f6ed5c2c3ccb05a93e741a6681cab5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"687b5192912afe4010a676e84f92e3644933d3e2502ec0a97e5b4c03fc758394\"" Jul 2 00:27:22.069045 containerd[1714]: time="2024-07-02T00:27:22.069006264Z" level=info msg="StartContainer for \"687b5192912afe4010a676e84f92e3644933d3e2502ec0a97e5b4c03fc758394\"" Jul 2 00:27:22.112593 systemd[1]: Started cri-containerd-687b5192912afe4010a676e84f92e3644933d3e2502ec0a97e5b4c03fc758394.scope - libcontainer container 687b5192912afe4010a676e84f92e3644933d3e2502ec0a97e5b4c03fc758394. 
Jul 2 00:27:22.135408 systemd[1]: cri-containerd-50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef.scope: Deactivated successfully. Jul 2 00:27:22.135697 systemd[1]: cri-containerd-50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef.scope: Consumed 6.721s CPU time. Jul 2 00:27:22.169166 containerd[1714]: time="2024-07-02T00:27:22.169095737Z" level=info msg="shim disconnected" id=50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef namespace=k8s.io Jul 2 00:27:22.169166 containerd[1714]: time="2024-07-02T00:27:22.169164738Z" level=warning msg="cleaning up after shim disconnected" id=50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef namespace=k8s.io Jul 2 00:27:22.170069 containerd[1714]: time="2024-07-02T00:27:22.169176238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:27:22.177772 containerd[1714]: time="2024-07-02T00:27:22.177723055Z" level=info msg="StartContainer for \"687b5192912afe4010a676e84f92e3644933d3e2502ec0a97e5b4c03fc758394\" returns successfully" Jul 2 00:27:22.590791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef-rootfs.mount: Deactivated successfully. Jul 2 00:27:22.595198 kubelet[3245]: E0702 00:27:22.594993 3245 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:48920->10.200.8.25:2379: read: connection timed out" Jul 2 00:27:22.600625 systemd[1]: cri-containerd-61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74.scope: Deactivated successfully. Jul 2 00:27:22.601157 systemd[1]: cri-containerd-61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74.scope: Consumed 1.811s CPU time, 16.4M memory peak, 0B memory swap peak. 
Jul 2 00:27:22.632644 containerd[1714]: time="2024-07-02T00:27:22.632578790Z" level=info msg="shim disconnected" id=61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74 namespace=k8s.io Jul 2 00:27:22.637816 containerd[1714]: time="2024-07-02T00:27:22.637476457Z" level=warning msg="cleaning up after shim disconnected" id=61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74 namespace=k8s.io Jul 2 00:27:22.637816 containerd[1714]: time="2024-07-02T00:27:22.637511658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:27:22.638852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74-rootfs.mount: Deactivated successfully. Jul 2 00:27:22.659453 containerd[1714]: time="2024-07-02T00:27:22.659383458Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:27:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:27:23.031960 kubelet[3245]: I0702 00:27:23.031928 3245 scope.go:117] "RemoveContainer" containerID="50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef" Jul 2 00:27:23.035470 containerd[1714]: time="2024-07-02T00:27:23.035323516Z" level=info msg="CreateContainer within sandbox \"2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 2 00:27:23.037361 kubelet[3245]: I0702 00:27:23.036919 3245 scope.go:117] "RemoveContainer" containerID="61ae8b1c2050c59eda7fc9ad7a6c3815694aec37764564642136ceda756f9b74" Jul 2 00:27:23.038904 containerd[1714]: time="2024-07-02T00:27:23.038873765Z" level=info msg="CreateContainer within sandbox \"bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 00:27:23.083535 containerd[1714]: 
time="2024-07-02T00:27:23.083489677Z" level=info msg="CreateContainer within sandbox \"2d44831275061849d6351bed02c5c4ef036ba28d79f24e6bd1812c773b505917\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be\"" Jul 2 00:27:23.084074 containerd[1714]: time="2024-07-02T00:27:23.084037384Z" level=info msg="StartContainer for \"5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be\"" Jul 2 00:27:23.097405 containerd[1714]: time="2024-07-02T00:27:23.097357067Z" level=info msg="CreateContainer within sandbox \"bd383f941dc0da71a0ead1c43c7f61b7639c2dd1bab9929abf1c98b5e3af03f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7864253a622581767af397e9b55c9f7ab8bf08dbb51a992016a8ba2d2f4a6c1a\"" Jul 2 00:27:23.098804 containerd[1714]: time="2024-07-02T00:27:23.098476483Z" level=info msg="StartContainer for \"7864253a622581767af397e9b55c9f7ab8bf08dbb51a992016a8ba2d2f4a6c1a\"" Jul 2 00:27:23.114670 systemd[1]: Started cri-containerd-5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be.scope - libcontainer container 5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be. Jul 2 00:27:23.141859 systemd[1]: Started cri-containerd-7864253a622581767af397e9b55c9f7ab8bf08dbb51a992016a8ba2d2f4a6c1a.scope - libcontainer container 7864253a622581767af397e9b55c9f7ab8bf08dbb51a992016a8ba2d2f4a6c1a. Jul 2 00:27:23.158457 containerd[1714]: time="2024-07-02T00:27:23.158072201Z" level=info msg="StartContainer for \"5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be\" returns successfully" Jul 2 00:27:23.207662 containerd[1714]: time="2024-07-02T00:27:23.207620081Z" level=info msg="StartContainer for \"7864253a622581767af397e9b55c9f7ab8bf08dbb51a992016a8ba2d2f4a6c1a\" returns successfully" Jul 2 00:27:23.595464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596029590.mount: Deactivated successfully. 
Jul 2 00:27:26.229638 kubelet[3245]: E0702 00:27:26.229591 3245 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:48724->10.200.8.25:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3975.1.1-a-7b42818af6.17de3dc7bccaff9a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3975.1.1-a-7b42818af6,UID:13af3f3d48ce1a08e90c8b035797f219,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-7b42818af6,},FirstTimestamp:2024-07-02 00:27:15.777314714 +0000 UTC m=+255.841935004,LastTimestamp:2024-07-02 00:27:15.777314714 +0000 UTC m=+255.841935004,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-7b42818af6,}" Jul 2 00:27:26.748346 systemd[1]: run-containerd-runc-k8s.io-33c584dc9b921e3b3cf4c236a13cbd5ac37d60c8e555421daafdb89591c90eef-runc.4s9dLa.mount: Deactivated successfully. 
Jul 2 00:27:32.245588 kubelet[3245]: I0702 00:27:32.245529 3245 status_manager.go:853] "Failed to get status for pod" podUID="13af3f3d48ce1a08e90c8b035797f219" pod="kube-system/kube-apiserver-ci-3975.1.1-a-7b42818af6" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:48846->10.200.8.25:2379: read: connection timed out" Jul 2 00:27:32.596061 kubelet[3245]: E0702 00:27:32.595297 3245 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7b42818af6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:27:34.681173 systemd[1]: cri-containerd-5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be.scope: Deactivated successfully. Jul 2 00:27:34.705094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be-rootfs.mount: Deactivated successfully. 
Jul 2 00:27:34.734483 containerd[1714]: time="2024-07-02T00:27:34.734380004Z" level=info msg="shim disconnected" id=5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be namespace=k8s.io Jul 2 00:27:34.735127 containerd[1714]: time="2024-07-02T00:27:34.734502705Z" level=warning msg="cleaning up after shim disconnected" id=5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be namespace=k8s.io Jul 2 00:27:34.735127 containerd[1714]: time="2024-07-02T00:27:34.734523206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:27:35.071066 kubelet[3245]: I0702 00:27:35.070871 3245 scope.go:117] "RemoveContainer" containerID="50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef" Jul 2 00:27:35.071597 kubelet[3245]: I0702 00:27:35.071159 3245 scope.go:117] "RemoveContainer" containerID="5bf479842eb1f7393d39da75afc3d69ffc41c28cfc6be62f238b192eb65142be" Jul 2 00:27:35.071597 kubelet[3245]: E0702 00:27:35.071534 3245 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-v68zg_tigera-operator(83c9a88f-213e-4153-ae3b-3e5c8407a5ef)\"" pod="tigera-operator/tigera-operator-76c4974c85-v68zg" podUID="83c9a88f-213e-4153-ae3b-3e5c8407a5ef" Jul 2 00:27:35.073026 containerd[1714]: time="2024-07-02T00:27:35.072982244Z" level=info msg="RemoveContainer for \"50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef\"" Jul 2 00:27:35.084525 containerd[1714]: time="2024-07-02T00:27:35.084474602Z" level=info msg="RemoveContainer for \"50bcda646d534c145ea869066c3f95dbb4bb1aed150651a6e0b2611f2b76a0ef\" returns successfully" Jul 2 00:27:42.596071 kubelet[3245]: E0702 00:27:42.595985 3245 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7b42818af6?timeout=10s\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)"