Jul 2 00:21:21.118691 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:21:21.118727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:21:21.118742 kernel: BIOS-provided physical RAM map:
Jul 2 00:21:21.118753 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 00:21:21.118763 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 2 00:21:21.118773 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 2 00:21:21.118787 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul 2 00:21:21.118801 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul 2 00:21:21.118812 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 2 00:21:21.118823 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 2 00:21:21.118834 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 2 00:21:21.118845 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 2 00:21:21.118856 kernel: printk: bootconsole [earlyser0] enabled
Jul 2 00:21:21.118868 kernel: NX (Execute Disable) protection: active
Jul 2 00:21:21.118884 kernel: APIC: Static calls initialized
Jul 2 00:21:21.118897 kernel: efi: EFI v2.7 by Microsoft
Jul 2 00:21:21.118909 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee73a98
Jul 2 00:21:21.118922 kernel: SMBIOS 3.1.0 present.
Jul 2 00:21:21.118934 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 2 00:21:21.118946 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 2 00:21:21.118959 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 2 00:21:21.118971 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jul 2 00:21:21.118983 kernel: Hyper-V: Nested features: 0x1e0101
Jul 2 00:21:21.118995 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 2 00:21:21.119010 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 2 00:21:21.119023 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 00:21:21.119035 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 00:21:21.119048 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 2 00:21:21.119061 kernel: tsc: Detected 2593.906 MHz processor
Jul 2 00:21:21.119074 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:21:21.119087 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:21:21.119099 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 2 00:21:21.119112 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 2 00:21:21.119127 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:21:21.119139 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 2 00:21:21.119152 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 2 00:21:21.119164 kernel: Using GB pages for direct mapping
Jul 2 00:21:21.119176 kernel: Secure boot disabled
Jul 2 00:21:21.119189 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:21:21.119202 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 2 00:21:21.119219 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119236 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119249 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 00:21:21.119262 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 2 00:21:21.119276 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119290 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119303 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119319 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119332 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119345 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119359 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119372 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 2 00:21:21.119386 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 2 00:21:21.119399 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 2 00:21:21.119413 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 2 00:21:21.119428 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 2 00:21:21.119449 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 2 00:21:21.119462 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 2 00:21:21.119475 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 2 00:21:21.119489 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 2 00:21:21.119502 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 2 00:21:21.119515 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:21:21.119528 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:21:21.119542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 2 00:21:21.119558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 2 00:21:21.119571 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 2 00:21:21.119585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 2 00:21:21.119598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 2 00:21:21.119617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 2 00:21:21.119630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 2 00:21:21.119642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 2 00:21:21.119662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 2 00:21:21.119688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 2 00:21:21.119717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 2 00:21:21.119729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 2 00:21:21.119742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 2 00:21:21.119755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 2 00:21:21.119768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 2 00:21:21.119782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 2 00:21:21.119794 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 2 00:21:21.119805 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 2 00:21:21.119818 kernel: Zone ranges:
Jul 2 00:21:21.119834 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:21:21.119847 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 00:21:21.119858 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 00:21:21.119870 kernel: Movable zone start for each node
Jul 2 00:21:21.119882 kernel: Early memory node ranges
Jul 2 00:21:21.119894 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 00:21:21.119906 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 2 00:21:21.119916 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 2 00:21:21.119937 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 00:21:21.119954 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 2 00:21:21.119965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:21:21.119976 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 00:21:21.119987 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 2 00:21:21.119999 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 2 00:21:21.120012 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 2 00:21:21.120023 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:21:21.120035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:21:21.120047 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:21:21.120063 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 2 00:21:21.120076 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:21:21.120090 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 2 00:21:21.120101 kernel: Booting paravirtualized kernel on Hyper-V
Jul 2 00:21:21.120113 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:21:21.120126 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:21:21.120140 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:21:21.120154 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:21:21.120167 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:21:21.120183 kernel: Hyper-V: PV spinlocks enabled
Jul 2 00:21:21.120197 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:21:21.120213 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:21:21.120227 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:21:21.120240 kernel: random: crng init done
Jul 2 00:21:21.120253 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 2 00:21:21.120267 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:21:21.120280 kernel: Fallback order for Node 0: 0
Jul 2 00:21:21.120296 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 2 00:21:21.120320 kernel: Policy zone: Normal
Jul 2 00:21:21.120337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:21:21.120351 kernel: software IO TLB: area num 2.
Jul 2 00:21:21.120366 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 316268K reserved, 0K cma-reserved)
Jul 2 00:21:21.120380 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:21:21.120395 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:21:21.120409 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:21:21.120423 kernel: Dynamic Preempt: voluntary
Jul 2 00:21:21.120463 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:21:21.120483 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:21:21.120501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:21:21.120516 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:21:21.120530 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:21:21.120545 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:21:21.120559 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:21:21.120577 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:21:21.120591 kernel: Using NULL legacy PIC
Jul 2 00:21:21.120606 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 2 00:21:21.120620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:21:21.120635 kernel: Console: colour dummy device 80x25
Jul 2 00:21:21.120649 kernel: printk: console [tty1] enabled
Jul 2 00:21:21.120664 kernel: printk: console [ttyS0] enabled
Jul 2 00:21:21.120677 kernel: printk: bootconsole [earlyser0] disabled
Jul 2 00:21:21.120690 kernel: ACPI: Core revision 20230628
Jul 2 00:21:21.120708 kernel: Failed to register legacy timer interrupt
Jul 2 00:21:21.120725 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:21:21.120739 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 2 00:21:21.120753 kernel: Hyper-V: Using IPI hypercalls
Jul 2 00:21:21.120767 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 2 00:21:21.120781 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 2 00:21:21.120795 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 2 00:21:21.120810 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 2 00:21:21.120824 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 2 00:21:21.120838 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 2 00:21:21.120856 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jul 2 00:21:21.120870 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 00:21:21.120882 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 00:21:21.120895 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:21:21.120908 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:21:21.120922 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:21:21.120935 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:21:21.120949 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 00:21:21.120963 kernel: RETBleed: Vulnerable
Jul 2 00:21:21.120979 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:21:21.120992 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:21:21.121005 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:21:21.121019 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 00:21:21.121033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:21:21.121046 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:21:21.121060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:21:21.121073 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 00:21:21.121087 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 00:21:21.121101 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 00:21:21.121114 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:21:21.121130 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 2 00:21:21.121143 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 2 00:21:21.121157 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 2 00:21:21.121170 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 2 00:21:21.121184 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:21:21.121197 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:21:21.121210 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:21:21.121224 kernel: SELinux: Initializing.
Jul 2 00:21:21.121237 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.121251 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.121265 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 00:21:21.121278 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:21:21.121295 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:21:21.121308 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:21:21.121322 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 00:21:21.121336 kernel: signal: max sigframe size: 3632
Jul 2 00:21:21.121350 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:21:21.121364 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:21:21.121378 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:21:21.121391 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:21:21.121405 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:21:21.121421 kernel: .... node #0, CPUs: #1
Jul 2 00:21:21.121478 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 2 00:21:21.121493 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 00:21:21.121507 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:21:21.121520 kernel: smpboot: Max logical packages: 1
Jul 2 00:21:21.121534 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 2 00:21:21.121548 kernel: devtmpfs: initialized
Jul 2 00:21:21.121561 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:21:21.121577 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 2 00:21:21.121591 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:21:21.121605 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:21:21.121618 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:21:21.121629 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:21:21.121642 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:21:21.121656 kernel: audit: type=2000 audit(1719879679.028:1): state=initialized audit_enabled=0 res=1
Jul 2 00:21:21.121669 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:21:21.121682 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:21:21.121712 kernel: cpuidle: using governor menu
Jul 2 00:21:21.121729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:21:21.121742 kernel: dca service started, version 1.12.1
Jul 2 00:21:21.121753 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 2 00:21:21.121767 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:21:21.121781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:21:21.121794 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:21:21.121806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:21:21.121824 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:21:21.121842 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:21:21.121855 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:21:21.121870 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:21:21.121884 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:21:21.121899 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:21:21.121913 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:21:21.121928 kernel: ACPI: Interpreter enabled
Jul 2 00:21:21.121942 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:21:21.121956 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:21:21.121974 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:21:21.121989 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 2 00:21:21.122003 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 2 00:21:21.122018 kernel: iommu: Default domain type: Translated
Jul 2 00:21:21.122033 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:21:21.122047 kernel: efivars: Registered efivars operations
Jul 2 00:21:21.122061 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:21:21.122076 kernel: PCI: System does not support PCI
Jul 2 00:21:21.122090 kernel: vgaarb: loaded
Jul 2 00:21:21.122107 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 2 00:21:21.122122 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:21:21.122137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:21:21.122151 kernel: pnp: PnP ACPI init
Jul 2 00:21:21.122164 kernel: pnp: PnP ACPI: found 3 devices
Jul 2 00:21:21.122178 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:21:21.122193 kernel: NET: Registered PF_INET protocol family
Jul 2 00:21:21.122207 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:21:21.122222 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 2 00:21:21.122239 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:21:21.122252 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:21:21.122267 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 2 00:21:21.122280 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 2 00:21:21.122294 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.122308 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.122322 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:21:21.122335 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:21:21.122350 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:21:21.122366 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 00:21:21.122380 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB)
Jul 2 00:21:21.122394 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:21:21.122408 kernel: Initialise system trusted keyrings
Jul 2 00:21:21.122422 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 2 00:21:21.122448 kernel: Key type asymmetric registered
Jul 2 00:21:21.122461 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:21:21.122474 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:21:21.122493 kernel: io scheduler mq-deadline registered
Jul 2 00:21:21.122509 kernel: io scheduler kyber registered
Jul 2 00:21:21.122520 kernel: io scheduler bfq registered
Jul 2 00:21:21.122532 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:21:21.122543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:21:21.122556 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:21:21.122567 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 2 00:21:21.122580 kernel: i8042: PNP: No PS/2 controller found.
Jul 2 00:21:21.122779 kernel: rtc_cmos 00:02: registered as rtc0
Jul 2 00:21:21.122907 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T00:21:20 UTC (1719879680)
Jul 2 00:21:21.123021 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 2 00:21:21.123038 kernel: intel_pstate: CPU model not supported
Jul 2 00:21:21.123051 kernel: efifb: probing for efifb
Jul 2 00:21:21.123063 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 00:21:21.123076 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 00:21:21.123088 kernel: efifb: scrolling: redraw
Jul 2 00:21:21.123099 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 00:21:21.123116 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:21:21.123131 kernel: fb0: EFI VGA frame buffer device
Jul 2 00:21:21.123144 kernel: pstore: Using crash dump compression: deflate
Jul 2 00:21:21.123157 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 2 00:21:21.123169 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:21:21.123453 kernel: Segment Routing with IPv6
Jul 2 00:21:21.123469 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:21:21.123477 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:21:21.123489 kernel: Key type dns_resolver registered
Jul 2 00:21:21.123497 kernel: IPI shorthand broadcast: enabled
Jul 2 00:21:21.123512 kernel: sched_clock: Marking stable (890002800, 50725800)->(1167793200, -227064600)
Jul 2 00:21:21.123520 kernel: registered taskstats version 1
Jul 2 00:21:21.123531 kernel: Loading compiled-in X.509 certificates
Jul 2 00:21:21.123540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:21:21.123548 kernel: Key type .fscrypt registered
Jul 2 00:21:21.123559 kernel: Key type fscrypt-provisioning registered
Jul 2 00:21:21.123567 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:21:21.123578 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:21:21.123589 kernel: ima: No architecture policies found
Jul 2 00:21:21.123600 kernel: clk: Disabling unused clocks
Jul 2 00:21:21.123609 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:21:21.123620 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:21:21.123628 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:21:21.123637 kernel: Run /init as init process
Jul 2 00:21:21.123647 kernel: with arguments:
Jul 2 00:21:21.123658 kernel: /init
Jul 2 00:21:21.123666 kernel: with environment:
Jul 2 00:21:21.123679 kernel: HOME=/
Jul 2 00:21:21.123687 kernel: TERM=linux
Jul 2 00:21:21.123697 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:21:21.123709 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:21:21.123722 systemd[1]: Detected virtualization microsoft.
Jul 2 00:21:21.123731 systemd[1]: Detected architecture x86-64.
Jul 2 00:21:21.123742 systemd[1]: Running in initrd.
Jul 2 00:21:21.123751 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:21:21.123765 systemd[1]: Hostname set to .
Jul 2 00:21:21.123774 systemd[1]: Initializing machine ID from random generator.
Jul 2 00:21:21.123785 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:21:21.123794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:21:21.123804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:21:21.123815 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:21:21.123825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:21:21.123835 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:21:21.123846 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:21:21.123856 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:21:21.123865 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:21:21.123873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:21:21.123884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:21:21.123894 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:21:21.123902 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:21:21.123916 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:21:21.123924 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:21:21.123936 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:21:21.123944 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:21:21.123956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:21:21.123964 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:21:21.123976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:21:21.123985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:21:21.123999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:21:21.124008 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:21:21.124019 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:21:21.124028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:21:21.124039 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:21:21.124049 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:21:21.124060 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:21:21.124069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:21:21.124080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:21.124092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:21:21.124128 systemd-journald[176]: Collecting audit messages is disabled.
Jul 2 00:21:21.124151 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:21:21.124163 systemd-journald[176]: Journal started
Jul 2 00:21:21.124192 systemd-journald[176]: Runtime Journal (/run/log/journal/10802c69ce514bdc9fde311e4d21ee36) is 8.0M, max 158.8M, 150.8M free.
Jul 2 00:21:21.119962 systemd-modules-load[177]: Inserted module 'overlay'
Jul 2 00:21:21.138447 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:21:21.140381 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:21:21.161685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:21:21.182261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:21:21.168641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:21:21.178128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:21.191770 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:21:21.207269 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jul 2 00:21:21.210326 kernel: Bridge firewalling registered
Jul 2 00:21:21.210657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:21:21.222578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:21:21.229538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:21:21.240086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:21:21.246943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:21:21.253853 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:21:21.264613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:21:21.271310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:21:21.282179 dracut-cmdline[207]: dracut-dracut-053 Jul 2 00:21:21.287186 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:21:21.304079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:21:21.317097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:21:21.358577 systemd-resolved[243]: Positive Trust Anchors: Jul 2 00:21:21.358597 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:21:21.358648 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:21:21.386149 systemd-resolved[243]: Defaulting to hostname 'linux'. Jul 2 00:21:21.387371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:21:21.393037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:21.405450 kernel: SCSI subsystem initialized Jul 2 00:21:21.418450 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 00:21:21.432462 kernel: iscsi: registered transport (tcp)
Jul 2 00:21:21.458412 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:21:21.458543 kernel: QLogic iSCSI HBA Driver
Jul 2 00:21:21.494445 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:21:21.503646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:21:21.537416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:21:21.537523 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:21:21.540930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:21:21.585456 kernel: raid6: avx512x4 gen() 18361 MB/s
Jul 2 00:21:21.604443 kernel: raid6: avx512x2 gen() 18452 MB/s
Jul 2 00:21:21.623443 kernel: raid6: avx512x1 gen() 18392 MB/s
Jul 2 00:21:21.643446 kernel: raid6: avx2x4 gen() 18418 MB/s
Jul 2 00:21:21.662445 kernel: raid6: avx2x2 gen() 18279 MB/s
Jul 2 00:21:21.682442 kernel: raid6: avx2x1 gen() 13675 MB/s
Jul 2 00:21:21.682488 kernel: raid6: using algorithm avx512x2 gen() 18452 MB/s
Jul 2 00:21:21.704452 kernel: raid6: .... xor() 30498 MB/s, rmw enabled
Jul 2 00:21:21.704486 kernel: raid6: using avx512x2 recovery algorithm
Jul 2 00:21:21.731460 kernel: xor: automatically using best checksumming function avx
Jul 2 00:21:21.902458 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:21:21.912636 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:21:21.923609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:21:21.953713 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jul 2 00:21:21.958202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:21:21.973590 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:21:21.986935 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jul 2 00:21:22.014989 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:21:22.022767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:21:22.066516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:21:22.081679 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:21:22.113314 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:21:22.121035 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:21:22.129138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:21:22.132863 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:21:22.148027 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:21:22.149654 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:21:22.179484 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:21:22.179885 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:21:22.194180 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:21:22.198592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:21:22.199023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:21:22.224529 kernel: hv_vmbus: Vmbus version:5.2
Jul 2 00:21:22.208810 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:21:22.216952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:21:22.217267 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:22.220655 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:22.240848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:22.250110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:21:22.250226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:22.271048 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:21:22.271112 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:21:22.270797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:22.296569 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 2 00:21:22.296458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:22.306727 kernel: PTP clock support registered
Jul 2 00:21:22.313865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:21:22.321009 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 2 00:21:22.330468 kernel: hv_vmbus: registering driver hv_netvsc
Jul 2 00:21:22.349449 kernel: hv_utils: Registering HyperV Utility Driver
Jul 2 00:21:22.349547 kernel: hv_vmbus: registering driver hv_utils
Jul 2 00:21:22.352460 kernel: hv_vmbus: registering driver hv_storvsc
Jul 2 00:21:22.358247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:21:22.374073 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:21:22.374119 kernel: scsi host1: storvsc_host_t
Jul 2 00:21:22.374318 kernel: scsi host0: storvsc_host_t
Jul 2 00:21:22.374444 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 2 00:21:22.374473 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 2 00:21:22.374491 kernel: hv_utils: Shutdown IC version 3.2
Jul 2 00:21:22.377509 kernel: hv_utils: Heartbeat IC version 3.0
Jul 2 00:21:22.380453 kernel: hv_utils: TimeSync IC version 4.0
Jul 2 00:21:22.872503 systemd-resolved[243]: Clock change detected. Flushing caches.
Jul 2 00:21:22.889210 kernel: hv_vmbus: registering driver hid_hyperv
Jul 2 00:21:22.896684 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 2 00:21:22.896728 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 2 00:21:22.910934 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 2 00:21:22.913811 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:21:22.913833 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 2 00:21:22.923380 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 2 00:21:22.938708 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 00:21:22.938917 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 00:21:22.939093 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 2 00:21:22.939261 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 2 00:21:22.939432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:21:22.939453 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 00:21:23.049302 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: VF slot 1 added
Jul 2 00:21:23.058687 kernel: hv_vmbus: registering driver hv_pci
Jul 2 00:21:23.062710 kernel: hv_pci 313b2619-ab68-4a69-ba0c-511fe785d34d: PCI VMBus probing: Using version 0x10004
Jul 2 00:21:23.106255 kernel: hv_pci 313b2619-ab68-4a69-ba0c-511fe785d34d: PCI host bridge to bus ab68:00
Jul 2 00:21:23.106725 kernel: pci_bus ab68:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul 2 00:21:23.106899 kernel: pci_bus ab68:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 2 00:21:23.107045 kernel: pci ab68:00:02.0: [15b3:1016] type 00 class 0x020000
Jul 2 00:21:23.107247 kernel: pci ab68:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 2 00:21:23.107418 kernel: pci ab68:00:02.0: enabling Extended Tags
Jul 2 00:21:23.107610 kernel: pci ab68:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ab68:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 2 00:21:23.107799 kernel: pci_bus ab68:00: busn_res: [bus 00-ff] end is updated to 00
Jul 2 00:21:23.107947 kernel: pci ab68:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 2 00:21:23.305498 kernel: mlx5_core ab68:00:02.0: enabling device (0000 -> 0002)
Jul 2 00:21:23.558549 kernel: mlx5_core ab68:00:02.0: firmware version: 14.30.1284
Jul 2 00:21:23.558805 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446)
Jul 2 00:21:23.558830 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (449)
Jul 2 00:21:23.558851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:21:23.558871 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:21:23.558890 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:21:23.558910 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: VF registering: eth1
Jul 2 00:21:23.559092 kernel: mlx5_core ab68:00:02.0 eth1: joined to eth0
Jul 2 00:21:23.559282 kernel: mlx5_core ab68:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 2 00:21:23.350889 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 2 00:21:23.407881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 2 00:21:23.570473 kernel: mlx5_core ab68:00:02.0 enP43880s1: renamed from eth1
Jul 2 00:21:23.445727 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 2 00:21:23.461901 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 2 00:21:23.465715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 2 00:21:23.474837 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:21:24.506690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:21:24.507256 disk-uuid[600]: The operation has completed successfully.
Jul 2 00:21:24.602607 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:21:24.602731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:21:24.625832 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:21:24.632314 sh[715]: Success
Jul 2 00:21:24.661775 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:21:24.821157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:21:24.830786 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:21:24.837938 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:21:24.851672 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:21:24.851717 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:21:24.857613 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:21:24.860507 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:21:24.862994 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:21:25.190834 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:21:25.196895 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:21:25.206832 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:21:25.217979 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:21:25.234073 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:25.234105 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:21:25.234134 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:21:25.270143 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:21:25.280638 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:21:25.287997 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:25.293685 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:21:25.303895 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:21:25.328590 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:21:25.343828 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:21:25.364426 systemd-networkd[899]: lo: Link UP
Jul 2 00:21:25.364437 systemd-networkd[899]: lo: Gained carrier
Jul 2 00:21:25.366550 systemd-networkd[899]: Enumeration completed
Jul 2 00:21:25.367040 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:21:25.369499 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:21:25.369504 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:21:25.371974 systemd[1]: Reached target network.target - Network.
Jul 2 00:21:25.438683 kernel: mlx5_core ab68:00:02.0 enP43880s1: Link up
Jul 2 00:21:25.474684 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: Data path switched to VF: enP43880s1
Jul 2 00:21:25.474844 systemd-networkd[899]: enP43880s1: Link UP
Jul 2 00:21:25.474991 systemd-networkd[899]: eth0: Link UP
Jul 2 00:21:25.475156 systemd-networkd[899]: eth0: Gained carrier
Jul 2 00:21:25.475166 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:21:25.487911 systemd-networkd[899]: enP43880s1: Gained carrier
Jul 2 00:21:25.516718 systemd-networkd[899]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 2 00:21:26.333013 ignition[856]: Ignition 2.18.0
Jul 2 00:21:26.333026 ignition[856]: Stage: fetch-offline
Jul 2 00:21:26.335021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:21:26.333086 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:26.350869 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:21:26.333097 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:26.333277 ignition[856]: parsed url from cmdline: ""
Jul 2 00:21:26.333282 ignition[856]: no config URL provided
Jul 2 00:21:26.333290 ignition[856]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:21:26.333303 ignition[856]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:21:26.333315 ignition[856]: failed to fetch config: resource requires networking
Jul 2 00:21:26.333575 ignition[856]: Ignition finished successfully
Jul 2 00:21:26.366280 ignition[909]: Ignition 2.18.0
Jul 2 00:21:26.366286 ignition[909]: Stage: fetch
Jul 2 00:21:26.366477 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:26.366488 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:26.366581 ignition[909]: parsed url from cmdline: ""
Jul 2 00:21:26.366584 ignition[909]: no config URL provided
Jul 2 00:21:26.366589 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:21:26.366596 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:21:26.366618 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 2 00:21:26.460383 ignition[909]: GET result: OK
Jul 2 00:21:26.460556 ignition[909]: config has been read from IMDS userdata
Jul 2 00:21:26.460592 ignition[909]: parsing config with SHA512: 71e9769845c406b26511fa4c7cf2f4059dd68824196aeaf5148a0affd674fcc279ec6055b268c4497178694db3cd23bc4d7928f2c1fa1efffb443b366c0d29be
Jul 2 00:21:26.467943 unknown[909]: fetched base config from "system"
Jul 2 00:21:26.467964 unknown[909]: fetched base config from "system"
Jul 2 00:21:26.468854 ignition[909]: fetch: fetch complete
Jul 2 00:21:26.467975 unknown[909]: fetched user config from "azure"
Jul 2 00:21:26.468862 ignition[909]: fetch: fetch passed
Jul 2 00:21:26.470530 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:21:26.468908 ignition[909]: Ignition finished successfully
Jul 2 00:21:26.486939 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:21:26.503419 ignition[916]: Ignition 2.18.0
Jul 2 00:21:26.503432 ignition[916]: Stage: kargs
Jul 2 00:21:26.503647 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:26.503694 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:26.507864 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:21:26.504567 ignition[916]: kargs: kargs passed
Jul 2 00:21:26.504614 ignition[916]: Ignition finished successfully
Jul 2 00:21:26.524768 systemd-networkd[899]: enP43880s1: Gained IPv6LL
Jul 2 00:21:26.525146 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:21:26.540058 ignition[923]: Ignition 2.18.0
Jul 2 00:21:26.540069 ignition[923]: Stage: disks
Jul 2 00:21:26.542693 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:21:26.540288 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:26.546688 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:21:26.540302 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:26.541243 ignition[923]: disks: disks passed
Jul 2 00:21:26.541288 ignition[923]: Ignition finished successfully
Jul 2 00:21:26.564616 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:21:26.571869 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:21:26.571974 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:21:26.572435 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:21:26.588987 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:21:26.643389 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 2 00:21:26.650125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:21:26.665761 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:21:26.770002 kernel: EXT4-fs (sda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:21:26.770598 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:21:26.773508 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:21:26.915762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:21:26.922053 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:21:26.930913 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 00:21:26.946899 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Jul 2 00:21:26.946926 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:26.946940 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:21:26.934416 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:21:26.955172 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:21:26.934453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:21:26.947745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:21:26.967156 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:21:26.973815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:21:26.980878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:21:27.163877 systemd-networkd[899]: eth0: Gained IPv6LL
Jul 2 00:21:27.527533 coreos-metadata[945]: Jul 02 00:21:27.527 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 00:21:27.534379 coreos-metadata[945]: Jul 02 00:21:27.534 INFO Fetch successful
Jul 2 00:21:27.537316 coreos-metadata[945]: Jul 02 00:21:27.534 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 2 00:21:27.553648 coreos-metadata[945]: Jul 02 00:21:27.553 INFO Fetch successful
Jul 2 00:21:27.556429 coreos-metadata[945]: Jul 02 00:21:27.553 INFO wrote hostname ci-3975.1.1-a-106c6d4ee2 to /sysroot/etc/hostname
Jul 2 00:21:27.562518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:21:27.709693 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:21:27.737525 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:21:27.743260 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:21:27.748216 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:21:28.244715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:21:28.256796 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:21:28.262285 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:21:28.276209 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:21:28.282602 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:28.338717 ignition[1064]: INFO : Ignition 2.18.0
Jul 2 00:21:28.338717 ignition[1064]: INFO : Stage: mount
Jul 2 00:21:28.338717 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:28.338717 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:28.359945 ignition[1064]: INFO : mount: mount passed
Jul 2 00:21:28.359945 ignition[1064]: INFO : Ignition finished successfully
Jul 2 00:21:28.339091 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:21:28.344220 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:21:28.366615 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:21:28.377544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:21:28.402684 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1077)
Jul 2 00:21:28.402735 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:28.406673 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:21:28.411019 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:21:28.416679 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:21:28.418892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:21:28.445962 ignition[1093]: INFO : Ignition 2.18.0
Jul 2 00:21:28.445962 ignition[1093]: INFO : Stage: files
Jul 2 00:21:28.450924 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:28.450924 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:28.450924 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:21:28.459464 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:21:28.459464 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:21:28.573164 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:21:28.573708 unknown[1093]: wrote ssh authorized keys file for user: core
Jul 2 00:21:28.679542 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:21:28.773001 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 00:21:29.471813 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:21:31.328958 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:31.328958 ignition[1093]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 2 00:21:31.340607 ignition[1093]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:21:31.346956 ignition[1093]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:21:31.346956 ignition[1093]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 2 00:21:31.346956 ignition[1093]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: files passed
Jul 2 00:21:31.354195 ignition[1093]: INFO : Ignition finished successfully
Jul 2 00:21:31.348696 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:21:31.396984 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:21:31.408982 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:21:31.463932 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:21:31.464109 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:21:31.475612 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:21:31.480140 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:21:31.479803 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:21:31.491020 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:21:31.496157 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:21:31.506886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:21:31.544158 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:21:31.544279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:21:31.550021 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:21:31.558011 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:21:31.558172 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:21:31.560572 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:21:31.578755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:21:31.591815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:21:31.604080 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:21:31.604303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:21:31.605380 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:21:31.605800 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:21:31.605937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:21:31.606697 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:21:31.607160 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:21:31.607597 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:21:31.608059 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:21:31.608507 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:21:31.608936 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:21:31.609375 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:21:31.610335 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:21:31.610788 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:21:31.611228 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:21:31.611639 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:21:31.611781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:21:31.612547 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:21:31.613101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:21:31.613503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:21:31.659737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:21:31.663203 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:21:31.663367 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:21:31.722745 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:21:31.722956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:21:31.735009 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:21:31.735208 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:21:31.740330 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:21:31.745705 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:21:31.754160 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:21:31.756545 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:21:31.758762 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:21:31.765893 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:21:31.773574 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:21:31.773780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:21:31.777266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:21:31.777421 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:21:31.786135 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:21:31.803383 ignition[1147]: INFO : Ignition 2.18.0
Jul 2 00:21:31.803383 ignition[1147]: INFO : Stage: umount
Jul 2 00:21:31.803383 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:31.803383 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:31.803383 ignition[1147]: INFO : umount: umount passed
Jul 2 00:21:31.803383 ignition[1147]: INFO : Ignition finished successfully
Jul 2 00:21:31.786405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:21:31.797564 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:21:31.797738 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:21:31.805129 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:21:31.805235 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:21:31.808372 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:21:31.808419 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:21:31.812787 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:21:31.812836 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:21:31.815477 systemd[1]: Stopped target network.target - Network.
Jul 2 00:21:31.820518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:21:31.823794 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:21:31.831307 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:21:31.869693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:21:31.872959 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:21:31.880420 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:21:31.882910 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:21:31.887868 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:21:31.890266 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:21:31.892946 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:21:31.892988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:21:31.893510 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:21:31.893555 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:21:31.894410 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:21:31.894446 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:21:31.894997 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:21:31.895349 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:21:31.896920 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:21:31.897418 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:21:31.897495 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:21:31.898475 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:21:31.898568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:21:31.920775 systemd-networkd[899]: eth0: DHCPv6 lease lost
Jul 2 00:21:31.923939 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:21:31.924049 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:21:31.961475 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:21:31.961609 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:21:31.969797 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:21:31.969864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:21:31.985772 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:21:31.991050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:21:31.991122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:21:32.000700 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:21:32.000763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:21:32.008625 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:21:32.008697 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:21:32.017267 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:21:32.017331 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:21:32.026744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:21:32.046937 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:21:32.047097 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:21:32.053624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:21:32.053717 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:21:32.065318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:21:32.065367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:21:32.073709 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:21:32.073779 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:21:32.086265 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:21:32.088887 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: Data path switched from VF: enP43880s1
Jul 2 00:21:32.086332 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:21:32.092467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:21:32.092516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:21:32.106906 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:21:32.109941 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:21:32.109998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:21:32.113465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:21:32.113515 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:32.131392 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:21:32.131529 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:21:32.136902 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:21:32.136989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:21:32.146252 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:21:32.161853 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:21:32.258536 systemd[1]: Switching root.
Jul 2 00:21:32.291242 systemd-journald[176]: Journal stopped
Jul 2 00:21:21.118691 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:21:21.118727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:21:21.118742 kernel: BIOS-provided physical RAM map:
Jul 2 00:21:21.118753 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 00:21:21.118763 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 2 00:21:21.118773 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 2 00:21:21.118787 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul 2 00:21:21.118801 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul 2 00:21:21.118812 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 2 00:21:21.118823 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 2 00:21:21.118834 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 2 00:21:21.118845 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 2 00:21:21.118856 kernel: printk: bootconsole [earlyser0] enabled
Jul 2 00:21:21.118868 kernel: NX (Execute Disable) protection: active
Jul 2 00:21:21.118884 kernel: APIC: Static calls initialized
Jul 2 00:21:21.118897 kernel: efi: EFI v2.7 by Microsoft
Jul 2 00:21:21.118909 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee73a98
Jul 2 00:21:21.118922 kernel: SMBIOS 3.1.0 present.
Jul 2 00:21:21.118934 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 2 00:21:21.118946 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 2 00:21:21.118959 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 2 00:21:21.118971 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jul 2 00:21:21.118983 kernel: Hyper-V: Nested features: 0x1e0101
Jul 2 00:21:21.118995 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 2 00:21:21.119010 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 2 00:21:21.119023 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 00:21:21.119035 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 00:21:21.119048 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 2 00:21:21.119061 kernel: tsc: Detected 2593.906 MHz processor
Jul 2 00:21:21.119074 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:21:21.119087 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:21:21.119099 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 2 00:21:21.119112 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 2 00:21:21.119127 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:21:21.119139 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 2 00:21:21.119152 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 2 00:21:21.119164 kernel: Using GB pages for direct mapping
Jul 2 00:21:21.119176 kernel: Secure boot disabled
Jul 2 00:21:21.119189 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:21:21.119202 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 2 00:21:21.119219 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119236 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119249 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 00:21:21.119262 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 2 00:21:21.119276 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119290 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119303 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119319 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119332 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119345 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119359 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:21:21.119372 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 2 00:21:21.119386 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 2 00:21:21.119399 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 2 00:21:21.119413 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 2 00:21:21.119428 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 2 00:21:21.119449 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 2 00:21:21.119462 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 2 00:21:21.119475 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 2 00:21:21.119489 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 2 00:21:21.119502 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 2 00:21:21.119515 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:21:21.119528 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:21:21.119542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 2 00:21:21.119558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 2 00:21:21.119571 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 2 00:21:21.119585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 2 00:21:21.119598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 2 00:21:21.119617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 2 00:21:21.119630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 2 00:21:21.119642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 2 00:21:21.119662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 2 00:21:21.119688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 2 00:21:21.119717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 2 00:21:21.119729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 2 00:21:21.119742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 2 00:21:21.119755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 2 00:21:21.119768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 2 00:21:21.119782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 2 00:21:21.119794 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 2 00:21:21.119805 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 2 00:21:21.119818 kernel: Zone ranges:
Jul 2 00:21:21.119834 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:21:21.119847 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 00:21:21.119858 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 00:21:21.119870 kernel: Movable zone start for each node
Jul 2 00:21:21.119882 kernel: Early memory node ranges
Jul 2 00:21:21.119894 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 00:21:21.119906 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 2 00:21:21.119916 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 2 00:21:21.119937 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 00:21:21.119954 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 2 00:21:21.119965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:21:21.119976 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 00:21:21.119987 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 2 00:21:21.119999 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 2 00:21:21.120012 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 2 00:21:21.120023 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:21:21.120035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:21:21.120047 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:21:21.120063 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 2 00:21:21.120076 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:21:21.120090 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 2 00:21:21.120101 kernel: Booting paravirtualized kernel on Hyper-V
Jul 2 00:21:21.120113 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:21:21.120126 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:21:21.120140 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:21:21.120154 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:21:21.120167 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:21:21.120183 kernel: Hyper-V: PV spinlocks enabled
Jul 2 00:21:21.120197 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:21:21.120213 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:21:21.120227 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:21:21.120240 kernel: random: crng init done
Jul 2 00:21:21.120253 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 2 00:21:21.120267 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:21:21.120280 kernel: Fallback order for Node 0: 0
Jul 2 00:21:21.120296 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 2 00:21:21.120320 kernel: Policy zone: Normal
Jul 2 00:21:21.120337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:21:21.120351 kernel: software IO TLB: area num 2.
Jul 2 00:21:21.120366 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 316268K reserved, 0K cma-reserved)
Jul 2 00:21:21.120380 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:21:21.120395 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:21:21.120409 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:21:21.120423 kernel: Dynamic Preempt: voluntary
Jul 2 00:21:21.120463 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:21:21.120483 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:21:21.120501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:21:21.120516 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:21:21.120530 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:21:21.120545 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:21:21.120559 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:21:21.120577 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:21:21.120591 kernel: Using NULL legacy PIC
Jul 2 00:21:21.120606 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 2 00:21:21.120620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:21:21.120635 kernel: Console: colour dummy device 80x25
Jul 2 00:21:21.120649 kernel: printk: console [tty1] enabled
Jul 2 00:21:21.120664 kernel: printk: console [ttyS0] enabled
Jul 2 00:21:21.120677 kernel: printk: bootconsole [earlyser0] disabled
Jul 2 00:21:21.120690 kernel: ACPI: Core revision 20230628
Jul 2 00:21:21.120708 kernel: Failed to register legacy timer interrupt
Jul 2 00:21:21.120725 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:21:21.120739 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 2 00:21:21.120753 kernel: Hyper-V: Using IPI hypercalls
Jul 2 00:21:21.120767 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 2 00:21:21.120781 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 2 00:21:21.120795 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 2 00:21:21.120810 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 2 00:21:21.120824 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 2 00:21:21.120838 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 2 00:21:21.120856 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jul 2 00:21:21.120870 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 00:21:21.120882 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 00:21:21.120895 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:21:21.120908 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:21:21.120922 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:21:21.120935 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:21:21.120949 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 00:21:21.120963 kernel: RETBleed: Vulnerable
Jul 2 00:21:21.120979 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:21:21.120992 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:21:21.121005 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:21:21.121019 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 00:21:21.121033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:21:21.121046 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:21:21.121060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:21:21.121073 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 00:21:21.121087 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 00:21:21.121101 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 00:21:21.121114 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:21:21.121130 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 2 00:21:21.121143 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 2 00:21:21.121157 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 2 00:21:21.121170 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 2 00:21:21.121184 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:21:21.121197 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:21:21.121210 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:21:21.121224 kernel: SELinux: Initializing.
Jul 2 00:21:21.121237 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.121251 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.121265 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 00:21:21.121278 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:21:21.121295 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:21:21.121308 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:21:21.121322 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 00:21:21.121336 kernel: signal: max sigframe size: 3632
Jul 2 00:21:21.121350 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:21:21.121364 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:21:21.121378 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:21:21.121391 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:21:21.121405 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:21:21.121421 kernel: .... node #0, CPUs: #1
Jul 2 00:21:21.121478 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 2 00:21:21.121493 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 00:21:21.121507 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:21:21.121520 kernel: smpboot: Max logical packages: 1
Jul 2 00:21:21.121534 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 2 00:21:21.121548 kernel: devtmpfs: initialized
Jul 2 00:21:21.121561 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:21:21.121577 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 2 00:21:21.121591 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:21:21.121605 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:21:21.121618 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:21:21.121629 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:21:21.121642 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:21:21.121656 kernel: audit: type=2000 audit(1719879679.028:1): state=initialized audit_enabled=0 res=1
Jul 2 00:21:21.121669 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:21:21.121682 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:21:21.121712 kernel: cpuidle: using governor menu
Jul 2 00:21:21.121729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:21:21.121742 kernel: dca service started, version 1.12.1
Jul 2 00:21:21.121753 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 2 00:21:21.121767 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:21:21.121781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:21:21.121794 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:21:21.121806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:21:21.121824 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:21:21.121842 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:21:21.121855 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:21:21.121870 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:21:21.121884 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:21:21.121899 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:21:21.121913 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:21:21.121928 kernel: ACPI: Interpreter enabled
Jul 2 00:21:21.121942 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:21:21.121956 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:21:21.121974 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:21:21.121989 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 2 00:21:21.122003 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 2 00:21:21.122018 kernel: iommu: Default domain type: Translated
Jul 2 00:21:21.122033 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:21:21.122047 kernel: efivars: Registered efivars operations
Jul 2 00:21:21.122061 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:21:21.122076 kernel: PCI: System does not support PCI
Jul 2 00:21:21.122090 kernel: vgaarb: loaded
Jul 2 00:21:21.122107 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 2 00:21:21.122122 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:21:21.122137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:21:21.122151 kernel: pnp: PnP ACPI init
Jul 2 00:21:21.122164 kernel: pnp: PnP ACPI: found 3 devices
Jul 2 00:21:21.122178 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:21:21.122193 kernel: NET: Registered PF_INET protocol family
Jul 2 00:21:21.122207 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:21:21.122222 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 2 00:21:21.122239 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:21:21.122252 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:21:21.122267 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 2 00:21:21.122280 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 2 00:21:21.122294 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.122308 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 00:21:21.122322 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:21:21.122335 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:21:21.122350 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:21:21.122366 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 00:21:21.122380 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB)
Jul 2 00:21:21.122394 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:21:21.122408 kernel: Initialise system trusted keyrings
Jul 2 00:21:21.122422 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 2 00:21:21.122448 kernel: Key type asymmetric registered
Jul 2 00:21:21.122461 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:21:21.122474 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:21:21.122493 kernel: io scheduler mq-deadline registered
Jul 2 00:21:21.122509 kernel: io scheduler kyber
registered Jul 2 00:21:21.122520 kernel: io scheduler bfq registered Jul 2 00:21:21.122532 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:21:21.122543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:21:21.122556 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:21:21.122567 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 00:21:21.122580 kernel: i8042: PNP: No PS/2 controller found. Jul 2 00:21:21.122779 kernel: rtc_cmos 00:02: registered as rtc0 Jul 2 00:21:21.122907 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T00:21:20 UTC (1719879680) Jul 2 00:21:21.123021 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 2 00:21:21.123038 kernel: intel_pstate: CPU model not supported Jul 2 00:21:21.123051 kernel: efifb: probing for efifb Jul 2 00:21:21.123063 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 00:21:21.123076 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 00:21:21.123088 kernel: efifb: scrolling: redraw Jul 2 00:21:21.123099 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 00:21:21.123116 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 00:21:21.123131 kernel: fb0: EFI VGA frame buffer device Jul 2 00:21:21.123144 kernel: pstore: Using crash dump compression: deflate Jul 2 00:21:21.123157 kernel: pstore: Registered efi_pstore as persistent store backend Jul 2 00:21:21.123169 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:21:21.123453 kernel: Segment Routing with IPv6 Jul 2 00:21:21.123469 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:21:21.123477 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:21:21.123489 kernel: Key type dns_resolver registered Jul 2 00:21:21.123497 kernel: IPI shorthand broadcast: enabled Jul 2 00:21:21.123512 kernel: sched_clock: Marking stable (890002800, 50725800)->(1167793200, -227064600) Jul 2 
00:21:21.123520 kernel: registered taskstats version 1 Jul 2 00:21:21.123531 kernel: Loading compiled-in X.509 certificates Jul 2 00:21:21.123540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:21:21.123548 kernel: Key type .fscrypt registered Jul 2 00:21:21.123559 kernel: Key type fscrypt-provisioning registered Jul 2 00:21:21.123567 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:21:21.123578 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:21:21.123589 kernel: ima: No architecture policies found Jul 2 00:21:21.123600 kernel: clk: Disabling unused clocks Jul 2 00:21:21.123609 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:21:21.123620 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:21:21.123628 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:21:21.123637 kernel: Run /init as init process Jul 2 00:21:21.123647 kernel: with arguments: Jul 2 00:21:21.123658 kernel: /init Jul 2 00:21:21.123666 kernel: with environment: Jul 2 00:21:21.123679 kernel: HOME=/ Jul 2 00:21:21.123687 kernel: TERM=linux Jul 2 00:21:21.123697 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:21:21.123709 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:21:21.123722 systemd[1]: Detected virtualization microsoft. Jul 2 00:21:21.123731 systemd[1]: Detected architecture x86-64. Jul 2 00:21:21.123742 systemd[1]: Running in initrd. Jul 2 00:21:21.123751 systemd[1]: No hostname configured, using default hostname. Jul 2 00:21:21.123765 systemd[1]: Hostname set to . 
Jul 2 00:21:21.123774 systemd[1]: Initializing machine ID from random generator. Jul 2 00:21:21.123785 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:21:21.123794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:21:21.123804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:21:21.123815 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:21:21.123825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:21:21.123835 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:21:21.123846 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 00:21:21.123856 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:21:21.123865 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:21:21.123873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:21:21.123884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:21:21.123894 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:21:21.123902 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:21:21.123916 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:21:21.123924 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:21:21.123936 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:21:21.123944 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:21:21.123956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 2 00:21:21.123964 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:21:21.123976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:21:21.123985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:21:21.123999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:21:21.124008 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:21:21.124019 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 00:21:21.124028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:21:21.124039 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:21:21.124049 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:21:21.124060 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:21:21.124069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:21:21.124080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:21.124092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:21:21.124128 systemd-journald[176]: Collecting audit messages is disabled. Jul 2 00:21:21.124151 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:21:21.124163 systemd-journald[176]: Journal started Jul 2 00:21:21.124192 systemd-journald[176]: Runtime Journal (/run/log/journal/10802c69ce514bdc9fde311e4d21ee36) is 8.0M, max 158.8M, 150.8M free. Jul 2 00:21:21.119962 systemd-modules-load[177]: Inserted module 'overlay' Jul 2 00:21:21.138447 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:21:21.140381 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:21:21.161685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jul 2 00:21:21.182261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:21:21.168641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:21:21.178128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:21.191770 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:21:21.207269 systemd-modules-load[177]: Inserted module 'br_netfilter' Jul 2 00:21:21.210326 kernel: Bridge firewalling registered Jul 2 00:21:21.210657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:21:21.222578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:21:21.229538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:21:21.240086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:21:21.246943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:21.253853 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:21:21.264613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:21:21.271310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 2 00:21:21.282179 dracut-cmdline[207]: dracut-dracut-053 Jul 2 00:21:21.287186 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:21:21.304079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:21:21.317097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:21:21.358577 systemd-resolved[243]: Positive Trust Anchors: Jul 2 00:21:21.358597 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:21:21.358648 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:21:21.386149 systemd-resolved[243]: Defaulting to hostname 'linux'. Jul 2 00:21:21.387371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:21:21.393037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:21.405450 kernel: SCSI subsystem initialized Jul 2 00:21:21.418450 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 00:21:21.432462 kernel: iscsi: registered transport (tcp) Jul 2 00:21:21.458412 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:21:21.458543 kernel: QLogic iSCSI HBA Driver Jul 2 00:21:21.494445 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:21:21.503646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:21:21.537416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:21:21.537523 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:21:21.540930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:21:21.585456 kernel: raid6: avx512x4 gen() 18361 MB/s Jul 2 00:21:21.604443 kernel: raid6: avx512x2 gen() 18452 MB/s Jul 2 00:21:21.623443 kernel: raid6: avx512x1 gen() 18392 MB/s Jul 2 00:21:21.643446 kernel: raid6: avx2x4 gen() 18418 MB/s Jul 2 00:21:21.662445 kernel: raid6: avx2x2 gen() 18279 MB/s Jul 2 00:21:21.682442 kernel: raid6: avx2x1 gen() 13675 MB/s Jul 2 00:21:21.682488 kernel: raid6: using algorithm avx512x2 gen() 18452 MB/s Jul 2 00:21:21.704452 kernel: raid6: .... xor() 30498 MB/s, rmw enabled Jul 2 00:21:21.704486 kernel: raid6: using avx512x2 recovery algorithm Jul 2 00:21:21.731460 kernel: xor: automatically using best checksumming function avx Jul 2 00:21:21.902458 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:21:21.912636 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:21:21.923609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:21:21.953713 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jul 2 00:21:21.958202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:21:21.973590 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 2 00:21:21.986935 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jul 2 00:21:22.014989 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:21:22.022767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:21:22.066516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:21:22.081679 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:21:22.113314 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:21:22.121035 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:21:22.129138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:21:22.132863 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:21:22.148027 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:21:22.149654 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:21:22.179484 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 00:21:22.179885 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:21:22.194180 kernel: AES CTR mode by8 optimization enabled Jul 2 00:21:22.198592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:21:22.199023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:22.224529 kernel: hv_vmbus: Vmbus version:5.2 Jul 2 00:21:22.208810 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:21:22.216952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:21:22.217267 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:22.220655 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 00:21:22.240848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:22.250110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:21:22.250226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:22.271048 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 00:21:22.271112 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 00:21:22.270797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:22.296569 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 00:21:22.296458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:22.306727 kernel: PTP clock support registered Jul 2 00:21:22.313865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:21:22.321009 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 00:21:22.330468 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 00:21:22.349449 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 00:21:22.349547 kernel: hv_vmbus: registering driver hv_utils Jul 2 00:21:22.352460 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 00:21:22.358247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 00:21:22.374073 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 00:21:22.374119 kernel: scsi host1: storvsc_host_t Jul 2 00:21:22.374318 kernel: scsi host0: storvsc_host_t Jul 2 00:21:22.374444 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 00:21:22.374473 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 00:21:22.374491 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 00:21:22.377509 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 00:21:22.380453 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 00:21:22.872503 systemd-resolved[243]: Clock change detected. Flushing caches. Jul 2 00:21:22.889210 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 00:21:22.896684 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 00:21:22.896728 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 00:21:22.910934 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 00:21:22.913811 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 00:21:22.913833 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 00:21:22.923380 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 00:21:22.938708 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 00:21:22.938917 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 00:21:22.939093 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 00:21:22.939261 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 00:21:22.939432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:21:22.939453 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 00:21:23.049302 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: VF slot 1 added Jul 2 00:21:23.058687 kernel: hv_vmbus: registering driver hv_pci Jul 2 00:21:23.062710 kernel: hv_pci 313b2619-ab68-4a69-ba0c-511fe785d34d: PCI VMBus probing: Using 
version 0x10004 Jul 2 00:21:23.106255 kernel: hv_pci 313b2619-ab68-4a69-ba0c-511fe785d34d: PCI host bridge to bus ab68:00 Jul 2 00:21:23.106725 kernel: pci_bus ab68:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jul 2 00:21:23.106899 kernel: pci_bus ab68:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 00:21:23.107045 kernel: pci ab68:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 2 00:21:23.107247 kernel: pci ab68:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 00:21:23.107418 kernel: pci ab68:00:02.0: enabling Extended Tags Jul 2 00:21:23.107610 kernel: pci ab68:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ab68:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 2 00:21:23.107799 kernel: pci_bus ab68:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 00:21:23.107947 kernel: pci ab68:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 00:21:23.305498 kernel: mlx5_core ab68:00:02.0: enabling device (0000 -> 0002) Jul 2 00:21:23.558549 kernel: mlx5_core ab68:00:02.0: firmware version: 14.30.1284 Jul 2 00:21:23.558805 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446) Jul 2 00:21:23.558830 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (449) Jul 2 00:21:23.558851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:21:23.558871 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:21:23.558890 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:21:23.558910 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: VF registering: eth1 Jul 2 00:21:23.559092 kernel: mlx5_core ab68:00:02.0 eth1: joined to eth0 Jul 2 00:21:23.559282 kernel: mlx5_core ab68:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 2 00:21:23.350889 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk 
EFI-SYSTEM. Jul 2 00:21:23.407881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 00:21:23.570473 kernel: mlx5_core ab68:00:02.0 enP43880s1: renamed from eth1 Jul 2 00:21:23.445727 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 00:21:23.461901 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 00:21:23.465715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 2 00:21:23.474837 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:21:24.506690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:21:24.507256 disk-uuid[600]: The operation has completed successfully. Jul 2 00:21:24.602607 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:21:24.602731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:21:24.625832 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:21:24.632314 sh[715]: Success Jul 2 00:21:24.661775 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 00:21:24.821157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:21:24.830786 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:21:24.837938 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 2 00:21:24.851672 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:21:24.851717 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:24.857613 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:21:24.860507 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:21:24.862994 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:21:25.190834 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:21:25.196895 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:21:25.206832 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:21:25.217979 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:21:25.234073 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:25.234105 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:25.234134 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:21:25.270143 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:21:25.280638 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:21:25.287997 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:25.293685 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:21:25.303895 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:21:25.328590 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:21:25.343828 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 2 00:21:25.364426 systemd-networkd[899]: lo: Link UP Jul 2 00:21:25.364437 systemd-networkd[899]: lo: Gained carrier Jul 2 00:21:25.366550 systemd-networkd[899]: Enumeration completed Jul 2 00:21:25.367040 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:21:25.369499 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:25.369504 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:21:25.371974 systemd[1]: Reached target network.target - Network. Jul 2 00:21:25.438683 kernel: mlx5_core ab68:00:02.0 enP43880s1: Link up Jul 2 00:21:25.474684 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: Data path switched to VF: enP43880s1 Jul 2 00:21:25.474844 systemd-networkd[899]: enP43880s1: Link UP Jul 2 00:21:25.474991 systemd-networkd[899]: eth0: Link UP Jul 2 00:21:25.475156 systemd-networkd[899]: eth0: Gained carrier Jul 2 00:21:25.475166 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:25.487911 systemd-networkd[899]: enP43880s1: Gained carrier Jul 2 00:21:25.516718 systemd-networkd[899]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 00:21:26.333013 ignition[856]: Ignition 2.18.0 Jul 2 00:21:26.333026 ignition[856]: Stage: fetch-offline Jul 2 00:21:26.335021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:21:26.333086 ignition[856]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:26.350869 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 00:21:26.333097 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:26.333277 ignition[856]: parsed url from cmdline: "" Jul 2 00:21:26.333282 ignition[856]: no config URL provided Jul 2 00:21:26.333290 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:21:26.333303 ignition[856]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:21:26.333315 ignition[856]: failed to fetch config: resource requires networking Jul 2 00:21:26.333575 ignition[856]: Ignition finished successfully Jul 2 00:21:26.366280 ignition[909]: Ignition 2.18.0 Jul 2 00:21:26.366286 ignition[909]: Stage: fetch Jul 2 00:21:26.366477 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:26.366488 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:26.366581 ignition[909]: parsed url from cmdline: "" Jul 2 00:21:26.366584 ignition[909]: no config URL provided Jul 2 00:21:26.366589 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:21:26.366596 ignition[909]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:21:26.366618 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 00:21:26.460383 ignition[909]: GET result: OK Jul 2 00:21:26.460556 ignition[909]: config has been read from IMDS userdata Jul 2 00:21:26.460592 ignition[909]: parsing config with SHA512: 71e9769845c406b26511fa4c7cf2f4059dd68824196aeaf5148a0affd674fcc279ec6055b268c4497178694db3cd23bc4d7928f2c1fa1efffb443b366c0d29be Jul 2 00:21:26.467943 unknown[909]: fetched base config from "system" Jul 2 00:21:26.467964 unknown[909]: fetched base config from "system" Jul 2 00:21:26.468854 ignition[909]: fetch: fetch complete Jul 2 00:21:26.467975 unknown[909]: fetched user config from "azure" Jul 2 00:21:26.468862 ignition[909]: fetch: fetch passed Jul 2 00:21:26.470530 systemd[1]: Finished ignition-fetch.service - Ignition 
(fetch). Jul 2 00:21:26.468908 ignition[909]: Ignition finished successfully Jul 2 00:21:26.486939 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 00:21:26.503419 ignition[916]: Ignition 2.18.0 Jul 2 00:21:26.503432 ignition[916]: Stage: kargs Jul 2 00:21:26.503647 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:26.503694 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:26.507864 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:21:26.504567 ignition[916]: kargs: kargs passed Jul 2 00:21:26.504614 ignition[916]: Ignition finished successfully Jul 2 00:21:26.524768 systemd-networkd[899]: enP43880s1: Gained IPv6LL Jul 2 00:21:26.525146 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:21:26.540058 ignition[923]: Ignition 2.18.0 Jul 2 00:21:26.540069 ignition[923]: Stage: disks Jul 2 00:21:26.542693 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:21:26.540288 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:26.546688 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:21:26.540302 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:21:26.541243 ignition[923]: disks: disks passed Jul 2 00:21:26.541288 ignition[923]: Ignition finished successfully Jul 2 00:21:26.564616 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:21:26.571869 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:21:26.571974 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:21:26.572435 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:21:26.588987 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 2 00:21:26.643389 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 00:21:26.650125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:21:26.665761 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 00:21:26.770002 kernel: EXT4-fs (sda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:21:26.770598 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:21:26.773508 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:21:26.915762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:26.922053 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:21:26.930913 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 00:21:26.946899 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) Jul 2 00:21:26.946926 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:26.946940 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:26.934416 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:21:26.955172 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:21:26.934453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:21:26.947745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:21:26.967156 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:21:26.973815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 00:21:26.980878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:21:27.163877 systemd-networkd[899]: eth0: Gained IPv6LL
Jul 2 00:21:27.527533 coreos-metadata[945]: Jul 02 00:21:27.527 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 00:21:27.534379 coreos-metadata[945]: Jul 02 00:21:27.534 INFO Fetch successful
Jul 2 00:21:27.537316 coreos-metadata[945]: Jul 02 00:21:27.534 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 2 00:21:27.553648 coreos-metadata[945]: Jul 02 00:21:27.553 INFO Fetch successful
Jul 2 00:21:27.556429 coreos-metadata[945]: Jul 02 00:21:27.553 INFO wrote hostname ci-3975.1.1-a-106c6d4ee2 to /sysroot/etc/hostname
Jul 2 00:21:27.562518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:21:27.709693 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:21:27.737525 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:21:27.743260 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:21:27.748216 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:21:28.244715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:21:28.256796 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:21:28.262285 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:21:28.276209 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:21:28.282602 kernel: BTRFS info (device sda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:28.338717 ignition[1064]: INFO : Ignition 2.18.0
Jul 2 00:21:28.338717 ignition[1064]: INFO : Stage: mount
Jul 2 00:21:28.338717 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:28.338717 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:28.359945 ignition[1064]: INFO : mount: mount passed
Jul 2 00:21:28.359945 ignition[1064]: INFO : Ignition finished successfully
Jul 2 00:21:28.339091 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:21:28.344220 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:21:28.366615 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:21:28.377544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:21:28.402684 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1077)
Jul 2 00:21:28.402735 kernel: BTRFS info (device sda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:21:28.406673 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:21:28.411019 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:21:28.416679 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:21:28.418892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:21:28.445962 ignition[1093]: INFO : Ignition 2.18.0
Jul 2 00:21:28.445962 ignition[1093]: INFO : Stage: files
Jul 2 00:21:28.450924 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:28.450924 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:28.450924 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:21:28.459464 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:21:28.459464 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:21:28.573164 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:21:28.580741 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:21:28.573708 unknown[1093]: wrote ssh authorized keys file for user: core
Jul 2 00:21:28.679542 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:21:28.773001 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:28.778934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 00:21:29.471813 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:21:31.328958 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:21:31.328958 ignition[1093]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 2 00:21:31.340607 ignition[1093]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:21:31.346956 ignition[1093]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:21:31.346956 ignition[1093]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 2 00:21:31.346956 ignition[1093]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:21:31.354195 ignition[1093]: INFO : files: files passed
Jul 2 00:21:31.354195 ignition[1093]: INFO : Ignition finished successfully
Jul 2 00:21:31.348696 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:21:31.396984 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:21:31.408982 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:21:31.463932 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:21:31.464109 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:21:31.475612 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:21:31.480140 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:21:31.479803 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:21:31.491020 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:21:31.496157 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:21:31.506886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:21:31.544158 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:21:31.544279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:21:31.550021 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:21:31.558011 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:21:31.558172 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:21:31.560572 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:21:31.578755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:21:31.591815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:21:31.604080 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:21:31.604303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:21:31.605380 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:21:31.605800 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:21:31.605937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:21:31.606697 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:21:31.607160 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:21:31.607597 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:21:31.608059 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:21:31.608507 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:21:31.608936 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:21:31.609375 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:21:31.610335 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:21:31.610788 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:21:31.611228 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:21:31.611639 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:21:31.611781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:21:31.612547 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:21:31.613101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:21:31.613503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:21:31.659737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:21:31.663203 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:21:31.663367 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:21:31.722745 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:21:31.722956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:21:31.735009 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:21:31.735208 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:21:31.740330 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:21:31.745705 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:21:31.754160 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:21:31.756545 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:21:31.758762 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:21:31.765893 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:21:31.773574 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:21:31.773780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:21:31.777266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:21:31.777421 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:21:31.786135 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:21:31.803383 ignition[1147]: INFO : Ignition 2.18.0
Jul 2 00:21:31.803383 ignition[1147]: INFO : Stage: umount
Jul 2 00:21:31.803383 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:21:31.803383 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:21:31.803383 ignition[1147]: INFO : umount: umount passed
Jul 2 00:21:31.803383 ignition[1147]: INFO : Ignition finished successfully
Jul 2 00:21:31.786405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:21:31.797564 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:21:31.797738 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:21:31.805129 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:21:31.805235 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:21:31.808372 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:21:31.808419 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:21:31.812787 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:21:31.812836 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:21:31.815477 systemd[1]: Stopped target network.target - Network.
Jul 2 00:21:31.820518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:21:31.823794 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:21:31.831307 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:21:31.869693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:21:31.872959 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:21:31.880420 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:21:31.882910 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:21:31.887868 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:21:31.890266 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:21:31.892946 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:21:31.892988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:21:31.893510 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:21:31.893555 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:21:31.894410 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:21:31.894446 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:21:31.894997 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:21:31.895349 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:21:31.896920 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:21:31.897418 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:21:31.897495 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:21:31.898475 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:21:31.898568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:21:31.920775 systemd-networkd[899]: eth0: DHCPv6 lease lost
Jul 2 00:21:31.923939 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:21:31.924049 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:21:31.961475 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:21:31.961609 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:21:31.969797 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:21:31.969864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:21:31.985772 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:21:31.991050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:21:31.991122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:21:32.000700 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:21:32.000763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:21:32.008625 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:21:32.008697 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:21:32.017267 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:21:32.017331 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:21:32.026744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:21:32.046937 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:21:32.047097 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:21:32.053624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:21:32.053717 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:21:32.065318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:21:32.065367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:21:32.073709 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:21:32.073779 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:21:32.086265 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:21:32.088887 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: Data path switched from VF: enP43880s1
Jul 2 00:21:32.086332 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:21:32.092467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:21:32.092516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:21:32.106906 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:21:32.109941 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:21:32.109998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:21:32.113465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:21:32.113515 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:32.131392 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:21:32.131529 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:21:32.136902 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:21:32.136989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:21:32.146252 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:21:32.161853 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:21:32.258536 systemd[1]: Switching root.
Jul 2 00:21:32.291242 systemd-journald[176]: Journal stopped
Jul 2 00:21:38.111645 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:21:38.119053 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:21:38.119078 kernel: SELinux: policy capability open_perms=1
Jul 2 00:21:38.119094 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:21:38.119108 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:21:38.119120 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:21:38.119135 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:21:38.119153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:21:38.119182 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:21:38.119195 kernel: audit: type=1403 audit(1719879694.480:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:21:38.119213 systemd[1]: Successfully loaded SELinux policy in 134.494ms.
Jul 2 00:21:38.119229 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.832ms.
Jul 2 00:21:38.119243 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:21:38.119253 systemd[1]: Detected virtualization microsoft.
Jul 2 00:21:38.119267 systemd[1]: Detected architecture x86-64.
Jul 2 00:21:38.119277 systemd[1]: Detected first boot.
Jul 2 00:21:38.119287 systemd[1]: Hostname set to .
Jul 2 00:21:38.119296 systemd[1]: Initializing machine ID from random generator.
Jul 2 00:21:38.119306 zram_generator::config[1207]: No configuration found.
Jul 2 00:21:38.119319 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:21:38.119331 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:21:38.119346 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 2 00:21:38.119363 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:21:38.119379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:21:38.119395 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:21:38.119413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:21:38.119438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:21:38.119455 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:21:38.119472 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:21:38.119490 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:21:38.119508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:21:38.119526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:21:38.119545 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:21:38.119566 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:21:38.119583 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:21:38.119602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:21:38.119620 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:21:38.119639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:21:38.119668 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:21:38.119687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:21:38.119712 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:21:38.119730 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:21:38.119751 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:21:38.119768 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:21:38.119786 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:21:38.119803 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:21:38.119819 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:21:38.119838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:21:38.119855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:21:38.119876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:21:38.119893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:21:38.119912 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:21:38.119930 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:21:38.119947 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:21:38.119969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:38.119987 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:21:38.120005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:21:38.120022 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:21:38.120040 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:21:38.120059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:21:38.120077 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:21:38.120095 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:21:38.120116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:21:38.120135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:21:38.120153 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:21:38.120171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:21:38.120189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:21:38.120207 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:21:38.120226 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 00:21:38.120245 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 00:21:38.120266 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:21:38.120284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:21:38.120328 systemd-journald[1312]: Collecting audit messages is disabled.
Jul 2 00:21:38.120365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:21:38.120387 systemd-journald[1312]: Journal started
Jul 2 00:21:38.120423 systemd-journald[1312]: Runtime Journal (/run/log/journal/82d8ce8c4a0d47d48c802ee671d9cfbd) is 8.0M, max 158.8M, 150.8M free.
Jul 2 00:21:38.137687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:21:38.137777 kernel: fuse: init (API version 7.39)
Jul 2 00:21:38.161699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:21:38.180567 kernel: loop: module loaded
Jul 2 00:21:38.180638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:38.199412 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:21:38.201257 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:21:38.204900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:21:38.208085 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:21:38.211053 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:21:38.214138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:21:38.217285 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:21:38.220372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:21:38.223851 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:21:38.229910 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:21:38.230230 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:21:38.232678 kernel: ACPI: bus type drm_connector registered
Jul 2 00:21:38.234309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:21:38.234593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:21:38.238118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:21:38.238288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:21:38.241431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:21:38.241624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:21:38.245395 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:21:38.245590 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:21:38.248980 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:21:38.249304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:21:38.253060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:21:38.257108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:21:38.261605 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:21:38.283079 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:21:38.292804 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:21:38.303784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:21:38.309328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:21:38.390863 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:21:38.396868 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:21:38.400238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:21:38.409910 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:21:38.417874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:21:38.419002 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:21:38.424822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:21:38.430403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:21:38.437107 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:21:38.440606 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:21:38.444524 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:21:38.457023 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:21:38.472772 systemd-journald[1312]: Time spent on flushing to /var/log/journal/82d8ce8c4a0d47d48c802ee671d9cfbd is 24.144ms for 950 entries.
Jul 2 00:21:38.472772 systemd-journald[1312]: System Journal (/var/log/journal/82d8ce8c4a0d47d48c802ee671d9cfbd) is 8.0M, max 2.6G, 2.6G free.
Jul 2 00:21:38.521058 systemd-journald[1312]: Received client request to flush runtime journal.
Jul 2 00:21:38.468881 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:21:38.496866 udevadm[1376]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:21:38.523508 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:21:38.535778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:21:38.543015 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jul 2 00:21:38.543037 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jul 2 00:21:38.547892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:21:38.558885 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:21:38.793371 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:21:38.802992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:21:38.825709 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Jul 2 00:21:38.825736 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Jul 2 00:21:38.830456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:21:39.656477 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:21:39.664876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:21:39.691708 systemd-udevd[1399]: Using default interface naming scheme 'v255'.
Jul 2 00:21:39.995855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:21:40.009873 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:21:40.092688 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1410)
Jul 2 00:21:40.083654 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:21:40.091016 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 2 00:21:40.201687 kernel: hv_vmbus: registering driver hv_balloon
Jul 2 00:21:40.207675 kernel: hv_vmbus: registering driver hyperv_fb
Jul 2 00:21:40.214677 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 2 00:21:40.218676 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 2 00:21:40.218729 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:21:40.225314 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 2 00:21:40.227047 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:21:40.238805 kernel: Console: switching to colour dummy device 80x25
Jul 2 00:21:40.248742 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:21:40.374417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:40.421630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:21:40.422055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:40.433535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:40.446969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:21:40.447282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:40.456912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:21:40.584649 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1416)
Jul 2 00:21:40.564611 systemd-networkd[1407]: lo: Link UP
Jul 2 00:21:40.564617 systemd-networkd[1407]: lo: Gained carrier
Jul 2 00:21:40.569604 systemd-networkd[1407]: Enumeration completed
Jul 2 00:21:40.572849 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:21:40.574804 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:21:40.574809 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:21:40.605294 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:21:40.646698 kernel: mlx5_core ab68:00:02.0 enP43880s1: Link up
Jul 2 00:21:40.670728 kernel: hv_netvsc 002248a3-3dea-0022-48a3-3dea002248a3 eth0: Data path switched to VF: enP43880s1
Jul 2 00:21:40.673950 systemd-networkd[1407]: enP43880s1: Link UP
Jul 2 00:21:40.674181 systemd-networkd[1407]: eth0: Link UP
Jul 2 00:21:40.674243 systemd-networkd[1407]: eth0: Gained carrier
Jul 2 00:21:40.674345 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:21:40.681991 systemd-networkd[1407]: enP43880s1: Gained carrier
Jul 2 00:21:40.718737 systemd-networkd[1407]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 2 00:21:40.746523 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 2 00:21:40.747544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 2 00:21:40.791509 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:21:40.802797 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:21:40.869372 lvm[1492]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:21:40.900811 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:21:40.901281 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:21:40.911054 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:21:40.916018 lvm[1496]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:21:40.946834 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:21:40.951440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:21:40.955003 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:21:40.955044 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:21:40.958087 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:21:40.962052 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:21:40.971821 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:21:40.976313 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:21:40.979110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:21:40.987840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:21:40.994531 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:21:41.005000 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:21:41.010084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:21:41.013999 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:21:41.067144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:21:41.091683 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:21:41.112451 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:21:41.128107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:21:41.129109 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:21:41.426689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:21:41.459695 kernel: loop1: detected capacity change from 0 to 56904
Jul 2 00:21:41.725686 kernel: loop2: detected capacity change from 0 to 80568
Jul 2 00:21:41.755943 systemd-networkd[1407]: enP43880s1: Gained IPv6LL
Jul 2 00:21:41.968688 kernel: loop3: detected capacity change from 0 to 209816
Jul 2 00:21:42.009682 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:21:42.025693 kernel: loop5: detected capacity change from 0 to 56904
Jul 2 00:21:42.032677 kernel: loop6: detected capacity change from 0 to 80568
Jul 2 00:21:42.042682 kernel: loop7: detected capacity change from 0 to 209816
Jul 2 00:21:42.051019 (sd-merge)[1520]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 2 00:21:42.051592 (sd-merge)[1520]: Merged extensions into '/usr'.
Jul 2 00:21:42.055239 systemd[1]: Reloading requested from client PID 1505 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:21:42.055255 systemd[1]: Reloading...
Jul 2 00:21:42.109699 zram_generator::config[1543]: No configuration found.
Jul 2 00:21:42.267844 systemd-networkd[1407]: eth0: Gained IPv6LL
Jul 2 00:21:42.272132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:21:42.353861 systemd[1]: Reloading finished in 297 ms.
Jul 2 00:21:42.368892 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:21:42.373279 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:21:42.385262 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:21:42.392963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:21:42.401744 systemd[1]: Reloading requested from client PID 1612 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:21:42.401907 systemd[1]: Reloading...
Jul 2 00:21:42.459814 zram_generator::config[1638]: No configuration found.
Jul 2 00:21:42.466427 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:21:42.466958 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:21:42.468845 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:21:42.469377 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Jul 2 00:21:42.469558 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Jul 2 00:21:42.496831 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:21:42.496848 systemd-tmpfiles[1614]: Skipping /boot
Jul 2 00:21:42.508256 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:21:42.508270 systemd-tmpfiles[1614]: Skipping /boot
Jul 2 00:21:42.626283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:21:42.704848 systemd[1]: Reloading finished in 302 ms.
Jul 2 00:21:42.721146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:21:42.741206 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:21:42.747698 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:21:42.754835 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:21:42.766824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:21:42.772898 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:21:42.785409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:42.786005 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:21:42.789914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:21:42.802958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:21:42.811064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:21:42.833741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:21:42.833951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:42.835654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:21:42.838762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:21:42.847414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:21:42.847633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:21:42.852343 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:21:42.853104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:21:42.865674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:21:42.866072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:21:42.869853 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:21:42.874135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:21:42.890150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:42.890581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:21:42.900955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:21:42.912617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:21:42.928956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:21:42.936247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:21:42.936568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:42.944308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:21:42.944540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:21:42.948680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:21:42.948892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:21:42.953201 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:21:42.953424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:21:42.959167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:21:42.959532 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:21:42.966335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:42.966697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:21:42.969842 augenrules[1748]: No rules
Jul 2 00:21:42.972168 systemd-resolved[1712]: Positive Trust Anchors:
Jul 2 00:21:42.972182 systemd-resolved[1712]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:21:42.972233 systemd-resolved[1712]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:21:42.973036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:21:42.990699 systemd-resolved[1712]: Using system hostname 'ci-3975.1.1-a-106c6d4ee2'.
Jul 2 00:21:42.992998 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:21:42.999750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:21:43.008985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:21:43.012244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:21:43.012628 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:21:43.015644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:21:43.021990 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:21:43.026093 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:21:43.029741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:21:43.029960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:21:43.033782 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:21:43.034007 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:21:43.037549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:21:43.037903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:21:43.041617 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:21:43.041805 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:21:43.057488 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:21:43.066226 systemd[1]: Reached target network.target - Network.
Jul 2 00:21:43.068823 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:21:43.071838 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:21:43.075051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:21:43.075102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:21:43.285573 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:21:43.290432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:21:46.097143 ldconfig[1502]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:21:46.111694 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:21:46.123865 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:21:46.133693 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:21:46.137231 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:21:46.140619 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:21:46.144304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:21:46.147951 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:21:46.151284 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:21:46.154827 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:21:46.158177 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:21:46.158259 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:21:46.160498 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:21:46.163980 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:21:46.168873 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:21:46.172813 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:21:46.176677 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:21:46.179794 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:21:46.182543 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:21:46.185269 systemd[1]: System is tainted: cgroupsv1
Jul 2 00:21:46.185330 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:21:46.185371 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:21:46.193741 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 2 00:21:46.199758 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:21:46.206835 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:21:46.211638 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:21:46.228755 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:21:46.243921 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:21:46.247444 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:21:46.252226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:21:46.259817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:21:46.264246 (chronyd)[1784]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 2 00:21:46.268693 jq[1789]: false
Jul 2 00:21:46.269247 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:21:46.286275 chronyd[1801]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 2 00:21:46.286769 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:21:46.293909 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:21:46.302862 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:21:46.324911 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:21:46.331332 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:21:46.335185 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:21:46.338701 extend-filesystems[1790]: Found loop4
Jul 2 00:21:46.348751 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:21:46.350841 chronyd[1801]: Timezone right/UTC failed leap second check, ignoring
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found loop5
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found loop6
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found loop7
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda1
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda2
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda3
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found usr
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda4
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda6
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda7
Jul 2 00:21:46.366544 extend-filesystems[1790]: Found sda9
Jul 2 00:21:46.366544 extend-filesystems[1790]: Checking size of /dev/sda9
Jul 2 00:21:46.449798 update_engine[1811]: I0702 00:21:46.412987 1811 main.cc:92] Flatcar Update Engine starting
Jul 2 00:21:46.351080 chronyd[1801]: Loaded seccomp filter (level 2)
Jul 2 00:21:46.363769 systemd[1]: Started chronyd.service - NTP client/server.
Jul 2 00:21:46.450346 extend-filesystems[1790]: Old size kept for /dev/sda9
Jul 2 00:21:46.450346 extend-filesystems[1790]: Found sr0
Jul 2 00:21:46.372878 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:21:46.483007 jq[1812]: true
Jul 2 00:21:46.373169 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:21:46.374183 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:21:46.374482 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:21:46.404249 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:21:46.404566 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:21:46.423068 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:21:46.423298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:21:46.427466 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:21:46.506766 jq[1836]: true
Jul 2 00:21:46.515469 (ntainerd)[1837]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:21:46.518941 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:21:46.518195 dbus-daemon[1788]: [system] SELinux support is enabled
Jul 2 00:21:46.527831 systemd-logind[1807]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:21:46.533114 systemd-logind[1807]: New seat seat0.
Jul 2 00:21:46.541063 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:21:46.552454 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:21:46.552499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:21:46.556855 tar[1823]: linux-amd64/helm
Jul 2 00:21:46.557411 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:21:46.557443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:21:46.567465 update_engine[1811]: I0702 00:21:46.562108 1811 update_check_scheduler.cc:74] Next update check in 4m36s
Jul 2 00:21:46.565008 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:21:46.564159 dbus-daemon[1788]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 00:21:46.569204 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:21:46.577495 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:21:46.697293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1877)
Jul 2 00:21:46.708809 bash[1876]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:21:46.710137 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:21:46.715510 coreos-metadata[1786]: Jul 02 00:21:46.715 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 00:21:46.722875 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 00:21:46.745244 coreos-metadata[1786]: Jul 02 00:21:46.729 INFO Fetch successful Jul 2 00:21:46.745244 coreos-metadata[1786]: Jul 02 00:21:46.733 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 2 00:21:46.745244 coreos-metadata[1786]: Jul 02 00:21:46.744 INFO Fetch successful Jul 2 00:21:46.747322 coreos-metadata[1786]: Jul 02 00:21:46.747 INFO Fetching http://168.63.129.16/machine/6253dbb1-9456-4dab-80a6-f5ab1eb1ab10/42f251e2%2D7be9%2D4474%2D86b6%2D7d16b3c11647.%5Fci%2D3975.1.1%2Da%2D106c6d4ee2?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 2 00:21:46.753382 coreos-metadata[1786]: Jul 02 00:21:46.752 INFO Fetch successful Jul 2 00:21:46.753382 coreos-metadata[1786]: Jul 02 00:21:46.753 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 2 00:21:46.770463 coreos-metadata[1786]: Jul 02 00:21:46.768 INFO Fetch successful Jul 2 00:21:46.869646 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 00:21:46.878990 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:21:46.886551 locksmithd[1859]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:21:46.960598 sshd_keygen[1830]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:21:47.007211 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:21:47.023145 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:21:47.036048 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 2 00:21:47.060090 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:21:47.060422 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:21:47.084989 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 2 00:21:47.098846 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 2 00:21:47.115093 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:21:47.133150 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:21:47.144731 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:21:47.149267 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:21:47.435723 tar[1823]: linux-amd64/LICENSE Jul 2 00:21:47.435723 tar[1823]: linux-amd64/README.md Jul 2 00:21:47.447267 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:21:47.723195 containerd[1837]: time="2024-07-02T00:21:47.722829900Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:21:47.762193 containerd[1837]: time="2024-07-02T00:21:47.762141400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:21:47.762193 containerd[1837]: time="2024-07-02T00:21:47.762192800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.763873 containerd[1837]: time="2024-07-02T00:21:47.763832700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:47.763873 containerd[1837]: time="2024-07-02T00:21:47.763864500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.764308 containerd[1837]: time="2024-07-02T00:21:47.764201900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:47.764308 containerd[1837]: time="2024-07-02T00:21:47.764230300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:21:47.764405 containerd[1837]: time="2024-07-02T00:21:47.764339100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.764441 containerd[1837]: time="2024-07-02T00:21:47.764402800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:47.764441 containerd[1837]: time="2024-07-02T00:21:47.764419700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.764515 containerd[1837]: time="2024-07-02T00:21:47.764497200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.764760400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.764796400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.764812700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.765027300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.765050600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.765128700Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:21:47.765186 containerd[1837]: time="2024-07-02T00:21:47.765144700Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:21:47.780014 containerd[1837]: time="2024-07-02T00:21:47.779980500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:21:47.780130 containerd[1837]: time="2024-07-02T00:21:47.780028900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:21:47.780130 containerd[1837]: time="2024-07-02T00:21:47.780048200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:21:47.780130 containerd[1837]: time="2024-07-02T00:21:47.780094600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:21:47.780130 containerd[1837]: time="2024-07-02T00:21:47.780116300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:21:47.780271 containerd[1837]: time="2024-07-02T00:21:47.780131100Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:21:47.780271 containerd[1837]: time="2024-07-02T00:21:47.780185400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 00:21:47.780337 containerd[1837]: time="2024-07-02T00:21:47.780325200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:21:47.780378 containerd[1837]: time="2024-07-02T00:21:47.780349100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:21:47.780378 containerd[1837]: time="2024-07-02T00:21:47.780368300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:21:47.780441 containerd[1837]: time="2024-07-02T00:21:47.780388000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:21:47.780441 containerd[1837]: time="2024-07-02T00:21:47.780408400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.780441 containerd[1837]: time="2024-07-02T00:21:47.780431200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.780547 containerd[1837]: time="2024-07-02T00:21:47.780449700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.780547 containerd[1837]: time="2024-07-02T00:21:47.780467100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.780547 containerd[1837]: time="2024-07-02T00:21:47.780485600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.780547 containerd[1837]: time="2024-07-02T00:21:47.780513200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 2 00:21:47.780547 containerd[1837]: time="2024-07-02T00:21:47.780534200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.780734 containerd[1837]: time="2024-07-02T00:21:47.780551000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:21:47.780734 containerd[1837]: time="2024-07-02T00:21:47.780687900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781127200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781165200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781184900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781215000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781281100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781298700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781315800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781332400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781349500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781376200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781395200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781411600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781428800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:21:47.782091 containerd[1837]: time="2024-07-02T00:21:47.781572400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781594500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781611600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781629200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781647500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781682700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781700300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783032 containerd[1837]: time="2024-07-02T00:21:47.781716400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:21:47.783697 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.782054400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} 
DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.782126600Z" level=info msg="Connect containerd service" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.782170800Z" level=info msg="using legacy CRI server" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.782181800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.782293400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.782944700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:21:47.783797 
containerd[1837]: time="2024-07-02T00:21:47.782998600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783023700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783039000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783057100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783071500Z" level=info msg="Start subscribing containerd event" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783131600Z" level=info msg="Start recovering state" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783202500Z" level=info msg="Start event monitor" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783216100Z" level=info msg="Start snapshots syncer" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783227500Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783236500Z" level=info msg="Start streaming server" Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783418600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:21:47.783797 containerd[1837]: time="2024-07-02T00:21:47.783479300Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 00:21:47.788048 containerd[1837]: time="2024-07-02T00:21:47.784705800Z" level=info msg="containerd successfully booted in 0.062995s" Jul 2 00:21:48.098140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:48.105401 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:21:48.110450 systemd[1]: Startup finished in 822ms (firmware) + 24.215s (loader) + 14.186s (kernel) + 13.763s (userspace) = 52.987s. Jul 2 00:21:48.118105 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.472551Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.473013Z INFO Daemon Daemon OS: flatcar 3975.1.1 Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.473606Z INFO Daemon Daemon Python: 3.11.9 Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.474423Z INFO Daemon Daemon Run daemon Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.475289Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.1.1' Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.476147Z INFO Daemon Daemon Using waagent for provisioning Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.476779Z INFO Daemon Daemon Activate resource disk Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.477558Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.481953Z INFO Daemon Daemon Found device: None Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.482630Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.483090Z ERROR Daemon 
Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.485596Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 00:21:48.490020 waagent[1944]: 2024-07-02T00:21:48.489103Z INFO Daemon Daemon Running default provisioning handler Jul 2 00:21:48.519778 waagent[1944]: 2024-07-02T00:21:48.519572Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 2 00:21:48.535715 waagent[1944]: 2024-07-02T00:21:48.521767Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 00:21:48.535715 waagent[1944]: 2024-07-02T00:21:48.522777Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 00:21:48.535715 waagent[1944]: 2024-07-02T00:21:48.523372Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 00:21:48.631615 login[1948]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 00:21:48.637024 login[1949]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 00:21:48.647024 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:21:48.650675 waagent[1944]: 2024-07-02T00:21:48.649138Z INFO Daemon Daemon Successfully mounted dvd Jul 2 00:21:48.655029 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:21:48.666288 systemd-logind[1807]: New session 2 of user core. Jul 2 00:21:48.673618 systemd-logind[1807]: New session 1 of user core. 
Jul 2 00:21:48.683974 waagent[1944]: 2024-07-02T00:21:48.682011Z INFO Daemon Daemon Detect protocol endpoint Jul 2 00:21:48.683974 waagent[1944]: 2024-07-02T00:21:48.682318Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 00:21:48.683974 waagent[1944]: 2024-07-02T00:21:48.683451Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 2 00:21:48.684508 waagent[1944]: 2024-07-02T00:21:48.684468Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 00:21:48.685572 waagent[1944]: 2024-07-02T00:21:48.685530Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 00:21:48.686404 waagent[1944]: 2024-07-02T00:21:48.686367Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 00:21:48.695691 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 00:21:48.702832 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:21:48.704997 waagent[1944]: 2024-07-02T00:21:48.699630Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 00:21:48.705501 waagent[1944]: 2024-07-02T00:21:48.705476Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 00:21:48.714803 waagent[1944]: 2024-07-02T00:21:48.714726Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 00:21:48.723189 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:21:48.726490 (systemd)[1993]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:48.806939 waagent[1944]: 2024-07-02T00:21:48.805569Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 00:21:48.806939 waagent[1944]: 2024-07-02T00:21:48.805929Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 2 00:21:48.815409 waagent[1944]: 2024-07-02T00:21:48.814449Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 00:21:48.832556 waagent[1944]: 2024-07-02T00:21:48.832477Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jul 2 00:21:48.837917 waagent[1944]: 2024-07-02T00:21:48.836182Z INFO Daemon Jul 2 00:21:48.837917 waagent[1944]: 2024-07-02T00:21:48.836376Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9ae31a52-3613-4831-94a3-d35b6fc8bdbc eTag: 6089941020748215429 source: Fabric] Jul 2 00:21:48.838054 waagent[1944]: 2024-07-02T00:21:48.837631Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 2 00:21:48.839311 waagent[1944]: 2024-07-02T00:21:48.839268Z INFO Daemon Jul 2 00:21:48.840149 waagent[1944]: 2024-07-02T00:21:48.840111Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 2 00:21:48.844490 waagent[1944]: 2024-07-02T00:21:48.844456Z INFO Daemon Daemon Downloading artifacts profile blob Jul 2 00:21:48.936773 waagent[1944]: 2024-07-02T00:21:48.936641Z INFO Daemon Downloaded certificate {'thumbprint': 'F53989DB285977E86BB2921F4C82B84C43B1838B', 'hasPrivateKey': True} Jul 2 00:21:48.937624 waagent[1944]: 2024-07-02T00:21:48.937574Z INFO Daemon Downloaded certificate {'thumbprint': '6A2690C0B1A6DDC6614417A08D9D5C5BC6383874', 'hasPrivateKey': False} Jul 2 00:21:48.938937 waagent[1944]: 2024-07-02T00:21:48.938892Z INFO Daemon Fetch goal state completed Jul 2 00:21:48.952315 waagent[1944]: 2024-07-02T00:21:48.952272Z INFO Daemon Daemon Starting provisioning Jul 2 00:21:48.954982 waagent[1944]: 2024-07-02T00:21:48.954938Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 2 00:21:48.955882 waagent[1944]: 2024-07-02T00:21:48.955844Z INFO Daemon Daemon Set hostname [ci-3975.1.1-a-106c6d4ee2] Jul 2 00:21:48.986684 waagent[1944]: 2024-07-02T00:21:48.984334Z INFO Daemon Daemon Publish hostname [ci-3975.1.1-a-106c6d4ee2] Jul 2 00:21:48.988456 waagent[1944]: 2024-07-02T00:21:48.988387Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 00:21:48.991379 waagent[1944]: 2024-07-02T00:21:48.991322Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 00:21:49.024361 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:49.024370 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:21:49.024420 systemd-networkd[1407]: eth0: DHCP lease lost Jul 2 00:21:49.028968 waagent[1944]: 2024-07-02T00:21:49.027407Z INFO Daemon Daemon Create user account if not exists Jul 2 00:21:49.028968 waagent[1944]: 2024-07-02T00:21:49.027817Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 00:21:49.028968 waagent[1944]: 2024-07-02T00:21:49.028587Z INFO Daemon Daemon Configure sudoer Jul 2 00:21:49.030206 waagent[1944]: 2024-07-02T00:21:49.030153Z INFO Daemon Daemon Configure sshd Jul 2 00:21:49.032022 waagent[1944]: 2024-07-02T00:21:49.031974Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 2 00:21:49.032950 waagent[1944]: 2024-07-02T00:21:49.032911Z INFO Daemon Daemon Deploy ssh public key. Jul 2 00:21:49.046127 systemd-networkd[1407]: eth0: DHCPv6 lease lost Jul 2 00:21:49.074337 systemd[1993]: Queued start job for default target default.target. Jul 2 00:21:49.075190 systemd[1993]: Created slice app.slice - User Application Slice. Jul 2 00:21:49.075577 systemd[1993]: Reached target paths.target - Paths. 
Jul 2 00:21:49.075596 systemd[1993]: Reached target timers.target - Timers. Jul 2 00:21:49.080695 systemd[1993]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:21:49.090732 systemd-networkd[1407]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 00:21:49.094448 systemd[1993]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:21:49.094522 systemd[1993]: Reached target sockets.target - Sockets. Jul 2 00:21:49.094539 systemd[1993]: Reached target basic.target - Basic System. Jul 2 00:21:49.094588 systemd[1993]: Reached target default.target - Main User Target. Jul 2 00:21:49.094620 systemd[1993]: Startup finished in 360ms. Jul 2 00:21:49.095198 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:21:49.101809 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:21:49.104417 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:21:49.312908 kubelet[1969]: E0702 00:21:49.312820 1969 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:21:49.315607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:21:49.315952 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:21:50.377891 waagent[1944]: 2024-07-02T00:21:50.377807Z INFO Daemon Daemon Provisioning complete Jul 2 00:21:50.391704 waagent[1944]: 2024-07-02T00:21:50.391637Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 00:21:50.399616 waagent[1944]: 2024-07-02T00:21:50.392017Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 2 00:21:50.399616 waagent[1944]: 2024-07-02T00:21:50.393060Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 2 00:21:50.519983 waagent[2039]: 2024-07-02T00:21:50.519880Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 2 00:21:50.520447 waagent[2039]: 2024-07-02T00:21:50.520045Z INFO ExtHandler ExtHandler OS: flatcar 3975.1.1 Jul 2 00:21:50.520447 waagent[2039]: 2024-07-02T00:21:50.520136Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 2 00:21:50.563113 waagent[2039]: 2024-07-02T00:21:50.563004Z INFO ExtHandler ExtHandler Distro: flatcar-3975.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 00:21:50.563363 waagent[2039]: 2024-07-02T00:21:50.563303Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 00:21:50.563468 waagent[2039]: 2024-07-02T00:21:50.563422Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 00:21:50.571079 waagent[2039]: 2024-07-02T00:21:50.571007Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 00:21:50.577281 waagent[2039]: 2024-07-02T00:21:50.577231Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 00:21:50.577777 waagent[2039]: 2024-07-02T00:21:50.577728Z INFO ExtHandler Jul 2 00:21:50.577861 waagent[2039]: 2024-07-02T00:21:50.577827Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 508382d7-4de7-470a-bebc-bd49dd4cdc26 eTag: 6089941020748215429 source: Fabric] Jul 2 00:21:50.578171 waagent[2039]: 2024-07-02T00:21:50.578126Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 2 00:21:50.578755 waagent[2039]: 2024-07-02T00:21:50.578700Z INFO ExtHandler Jul 2 00:21:50.578839 waagent[2039]: 2024-07-02T00:21:50.578791Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 00:21:50.582240 waagent[2039]: 2024-07-02T00:21:50.582196Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 00:21:50.660591 waagent[2039]: 2024-07-02T00:21:50.660448Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F53989DB285977E86BB2921F4C82B84C43B1838B', 'hasPrivateKey': True} Jul 2 00:21:50.661002 waagent[2039]: 2024-07-02T00:21:50.660945Z INFO ExtHandler Downloaded certificate {'thumbprint': '6A2690C0B1A6DDC6614417A08D9D5C5BC6383874', 'hasPrivateKey': False} Jul 2 00:21:50.661428 waagent[2039]: 2024-07-02T00:21:50.661379Z INFO ExtHandler Fetch goal state completed Jul 2 00:21:50.676382 waagent[2039]: 2024-07-02T00:21:50.676309Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2039 Jul 2 00:21:50.676552 waagent[2039]: 2024-07-02T00:21:50.676500Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 2 00:21:50.678096 waagent[2039]: 2024-07-02T00:21:50.678037Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 00:21:50.678488 waagent[2039]: 2024-07-02T00:21:50.678436Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 00:21:50.693630 waagent[2039]: 2024-07-02T00:21:50.693586Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 00:21:50.693846 waagent[2039]: 2024-07-02T00:21:50.693798Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 00:21:50.701357 waagent[2039]: 2024-07-02T00:21:50.701255Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Jul 2 00:21:50.708784 systemd[1]: Reloading requested from client PID 2054 ('systemctl') (unit waagent.service)... Jul 2 00:21:50.708802 systemd[1]: Reloading... Jul 2 00:21:50.783697 zram_generator::config[2086]: No configuration found. Jul 2 00:21:50.909896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:21:50.989989 systemd[1]: Reloading finished in 280 ms. Jul 2 00:21:51.016369 waagent[2039]: 2024-07-02T00:21:51.015902Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 2 00:21:51.025032 systemd[1]: Reloading requested from client PID 2147 ('systemctl') (unit waagent.service)... Jul 2 00:21:51.025050 systemd[1]: Reloading... Jul 2 00:21:51.092790 zram_generator::config[2175]: No configuration found. Jul 2 00:21:51.228774 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:21:51.308376 systemd[1]: Reloading finished in 282 ms. Jul 2 00:21:51.332171 waagent[2039]: 2024-07-02T00:21:51.331057Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 2 00:21:51.332171 waagent[2039]: 2024-07-02T00:21:51.331297Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 2 00:21:51.548768 waagent[2039]: 2024-07-02T00:21:51.548653Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 00:21:51.549463 waagent[2039]: 2024-07-02T00:21:51.549399Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 00:21:51.550273 waagent[2039]: 2024-07-02T00:21:51.550212Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 00:21:51.550419 waagent[2039]: 2024-07-02T00:21:51.550356Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 00:21:51.550893 waagent[2039]: 2024-07-02T00:21:51.550840Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 00:21:51.551064 waagent[2039]: 2024-07-02T00:21:51.551017Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 00:21:51.551135 waagent[2039]: 2024-07-02T00:21:51.551090Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 00:21:51.551222 waagent[2039]: 2024-07-02T00:21:51.551179Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 00:21:51.551488 waagent[2039]: 2024-07-02T00:21:51.551439Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 00:21:51.551783 waagent[2039]: 2024-07-02T00:21:51.551735Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 00:21:51.552026 waagent[2039]: 2024-07-02T00:21:51.551970Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 00:21:51.552371 waagent[2039]: 2024-07-02T00:21:51.552319Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 00:21:51.552548 waagent[2039]: 2024-07-02T00:21:51.552503Z INFO EnvHandler ExtHandler Configure routes Jul 2 00:21:51.552715 waagent[2039]: 2024-07-02T00:21:51.552671Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 2 00:21:51.553022 waagent[2039]: 2024-07-02T00:21:51.552972Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 00:21:51.553022 waagent[2039]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 00:21:51.553022 waagent[2039]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 00:21:51.553022 waagent[2039]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 00:21:51.553022 waagent[2039]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 00:21:51.553022 waagent[2039]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 00:21:51.553022 waagent[2039]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 00:21:51.553279 waagent[2039]: 2024-07-02T00:21:51.553078Z INFO EnvHandler ExtHandler Gateway:None Jul 2 00:21:51.553279 waagent[2039]: 2024-07-02T00:21:51.553165Z INFO EnvHandler ExtHandler Routes:None Jul 2 00:21:51.553724 waagent[2039]: 2024-07-02T00:21:51.553652Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 00:21:51.560202 waagent[2039]: 2024-07-02T00:21:51.560120Z INFO ExtHandler ExtHandler Jul 2 00:21:51.560267 waagent[2039]: 2024-07-02T00:21:51.560223Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 22b902ef-219e-4fd0-b876-b6dbf83c14f7 correlation 9f3c973b-8773-4373-bbd2-8dc48017a0cb created: 2024-07-02T00:20:42.464958Z] Jul 2 00:21:51.561129 waagent[2039]: 2024-07-02T00:21:51.561068Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 2 00:21:51.562010 waagent[2039]: 2024-07-02T00:21:51.561962Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 2 00:21:51.594777 waagent[2039]: 2024-07-02T00:21:51.594713Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8054F68D-D8FA-4FEC-B305-6E44A73B32EE;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 2 00:21:51.615479 waagent[2039]: 2024-07-02T00:21:51.615096Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 00:21:51.615479 waagent[2039]: Executing ['ip', '-a', '-o', 'link']: Jul 2 00:21:51.615479 waagent[2039]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 00:21:51.615479 waagent[2039]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a3:3d:ea brd ff:ff:ff:ff:ff:ff Jul 2 00:21:51.615479 waagent[2039]: 3: enP43880s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a3:3d:ea brd ff:ff:ff:ff:ff:ff\ altname enP43880p0s2 Jul 2 00:21:51.615479 waagent[2039]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 00:21:51.615479 waagent[2039]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 00:21:51.615479 waagent[2039]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 00:21:51.615479 waagent[2039]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 00:21:51.615479 waagent[2039]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 2 00:21:51.615479 waagent[2039]: 2: eth0 inet6 fe80::222:48ff:fea3:3dea/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 2 00:21:51.615479 waagent[2039]: 3: enP43880s1 inet6 fe80::222:48ff:fea3:3dea/64 scope link proto kernel_ll \ 
valid_lft forever preferred_lft forever Jul 2 00:21:51.692241 waagent[2039]: 2024-07-02T00:21:51.692147Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 2 00:21:51.692241 waagent[2039]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 00:21:51.692241 waagent[2039]: pkts bytes target prot opt in out source destination Jul 2 00:21:51.692241 waagent[2039]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 00:21:51.692241 waagent[2039]: pkts bytes target prot opt in out source destination Jul 2 00:21:51.692241 waagent[2039]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 00:21:51.692241 waagent[2039]: pkts bytes target prot opt in out source destination Jul 2 00:21:51.692241 waagent[2039]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 00:21:51.692241 waagent[2039]: 5 645 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 00:21:51.692241 waagent[2039]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 00:21:51.696051 waagent[2039]: 2024-07-02T00:21:51.695997Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 00:21:51.696051 waagent[2039]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 00:21:51.696051 waagent[2039]: pkts bytes target prot opt in out source destination Jul 2 00:21:51.696051 waagent[2039]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 00:21:51.696051 waagent[2039]: pkts bytes target prot opt in out source destination Jul 2 00:21:51.696051 waagent[2039]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 00:21:51.696051 waagent[2039]: pkts bytes target prot opt in out source destination Jul 2 00:21:51.696051 waagent[2039]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 00:21:51.696051 waagent[2039]: 11 1164 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 00:21:51.696051 waagent[2039]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 00:21:51.696430 
waagent[2039]: 2024-07-02T00:21:51.696296Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 00:21:59.566867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:21:59.573917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:59.671868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:59.676889 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:22:00.241801 kubelet[2283]: E0702 00:22:00.241743 2283 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:22:00.245954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:22:00.246277 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:22:10.147404 chronyd[1801]: Selected source PHC0 Jul 2 00:22:10.496811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:22:10.502885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:10.600849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:22:10.604725 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:22:11.169391 kubelet[2305]: E0702 00:22:11.169310 2305 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:22:11.172042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:22:11.172368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:22:19.299921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:22:19.304973 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.16.10:43040.service - OpenSSH per-connection server daemon (10.200.16.10:43040). Jul 2 00:22:19.996420 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 43040 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:20.000837 sshd[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:20.005413 systemd-logind[1807]: New session 3 of user core. Jul 2 00:22:20.011899 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:22:20.562034 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.16.10:43054.service - OpenSSH per-connection server daemon (10.200.16.10:43054). Jul 2 00:22:21.213844 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 43054 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:21.215566 sshd[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:21.216693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 2 00:22:21.228886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:21.232194 systemd-logind[1807]: New session 4 of user core. Jul 2 00:22:21.252973 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:22:21.349838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:21.350098 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:22:21.671479 sshd[2320]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:21.674499 systemd[1]: sshd@1-10.200.8.10:22-10.200.16.10:43054.service: Deactivated successfully. Jul 2 00:22:21.678770 systemd-logind[1807]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:22:21.679912 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:22:21.681745 systemd-logind[1807]: Removed session 4. Jul 2 00:22:21.782965 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.16.10:43060.service - OpenSSH per-connection server daemon (10.200.16.10:43060). Jul 2 00:22:21.954557 kubelet[2336]: E0702 00:22:21.954420 2336 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:22:21.957358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:22:21.957721 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:22:22.424531 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 43060 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:22.426279 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:22.432048 systemd-logind[1807]: New session 5 of user core. Jul 2 00:22:22.438915 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:22:22.879919 sshd[2345]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:22.885124 systemd-logind[1807]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:22:22.886142 systemd[1]: sshd@2-10.200.8.10:22-10.200.16.10:43060.service: Deactivated successfully. Jul 2 00:22:22.891033 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:22:22.892594 systemd-logind[1807]: Removed session 5. Jul 2 00:22:22.989296 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.16.10:43076.service - OpenSSH per-connection server daemon (10.200.16.10:43076). Jul 2 00:22:23.628121 sshd[2359]: Accepted publickey for core from 10.200.16.10 port 43076 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:23.629758 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:23.634372 systemd-logind[1807]: New session 6 of user core. Jul 2 00:22:23.643926 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:22:24.083703 sshd[2359]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:24.087442 systemd[1]: sshd@3-10.200.8.10:22-10.200.16.10:43076.service: Deactivated successfully. Jul 2 00:22:24.092897 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:22:24.093787 systemd-logind[1807]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:22:24.094628 systemd-logind[1807]: Removed session 6. 
Jul 2 00:22:24.200251 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.16.10:43086.service - OpenSSH per-connection server daemon (10.200.16.10:43086). Jul 2 00:22:24.840832 sshd[2367]: Accepted publickey for core from 10.200.16.10 port 43086 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:24.842525 sshd[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:24.847334 systemd-logind[1807]: New session 7 of user core. Jul 2 00:22:24.853901 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:22:25.313172 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:22:25.313535 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:25.344007 sudo[2371]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:25.448084 sshd[2367]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:25.453330 systemd[1]: sshd@4-10.200.8.10:22-10.200.16.10:43086.service: Deactivated successfully. Jul 2 00:22:25.458381 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:22:25.459152 systemd-logind[1807]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:22:25.460081 systemd-logind[1807]: Removed session 7. Jul 2 00:22:25.561241 systemd[1]: Started sshd@5-10.200.8.10:22-10.200.16.10:43098.service - OpenSSH per-connection server daemon (10.200.16.10:43098). Jul 2 00:22:26.197408 sshd[2376]: Accepted publickey for core from 10.200.16.10 port 43098 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:26.199210 sshd[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:26.204474 systemd-logind[1807]: New session 8 of user core. Jul 2 00:22:26.211152 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 00:22:26.551515 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:22:26.552025 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:26.555404 sudo[2381]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:26.560318 sudo[2380]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:22:26.560636 sudo[2380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:26.573007 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:22:26.575816 auditctl[2384]: No rules Jul 2 00:22:26.576237 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:22:26.576564 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:22:26.585342 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:22:26.607163 augenrules[2403]: No rules Jul 2 00:22:26.608901 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:22:26.611334 sudo[2380]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:26.715397 sshd[2376]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:26.720132 systemd[1]: sshd@5-10.200.8.10:22-10.200.16.10:43098.service: Deactivated successfully. Jul 2 00:22:26.724309 systemd-logind[1807]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:22:26.725060 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:22:26.726266 systemd-logind[1807]: Removed session 8. Jul 2 00:22:26.834282 systemd[1]: Started sshd@6-10.200.8.10:22-10.200.16.10:43104.service - OpenSSH per-connection server daemon (10.200.16.10:43104). 
Jul 2 00:22:27.475122 sshd[2412]: Accepted publickey for core from 10.200.16.10 port 43104 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:22:27.476895 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:27.481932 systemd-logind[1807]: New session 9 of user core. Jul 2 00:22:27.491971 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:22:27.830194 sudo[2416]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:22:27.830528 sudo[2416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:28.215947 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:22:28.218375 (dockerd)[2425]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:22:28.327894 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 2 00:22:28.946023 dockerd[2425]: time="2024-07-02T00:22:28.945956332Z" level=info msg="Starting up" Jul 2 00:22:29.050993 dockerd[2425]: time="2024-07-02T00:22:29.050942024Z" level=info msg="Loading containers: start." Jul 2 00:22:29.201737 kernel: Initializing XFRM netlink socket Jul 2 00:22:29.321570 systemd-networkd[1407]: docker0: Link UP Jul 2 00:22:29.352051 dockerd[2425]: time="2024-07-02T00:22:29.352014964Z" level=info msg="Loading containers: done." 
Jul 2 00:22:29.625840 dockerd[2425]: time="2024-07-02T00:22:29.625728536Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:22:29.626719 dockerd[2425]: time="2024-07-02T00:22:29.626004441Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:22:29.626719 dockerd[2425]: time="2024-07-02T00:22:29.626136243Z" level=info msg="Daemon has completed initialization" Jul 2 00:22:29.675981 dockerd[2425]: time="2024-07-02T00:22:29.675600588Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:22:29.675842 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:22:31.823211 containerd[1837]: time="2024-07-02T00:22:31.823170248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 00:22:32.160830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 00:22:32.164490 update_engine[1811]: I0702 00:22:32.163707 1811 update_attempter.cc:509] Updating boot flags... Jul 2 00:22:32.166103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:32.242711 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2566) Jul 2 00:22:32.625765 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2566) Jul 2 00:22:32.772945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:22:32.779127 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:22:32.822127 kubelet[2628]: E0702 00:22:32.822058 2628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:22:32.824878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:22:32.825187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:22:33.051539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082559731.mount: Deactivated successfully. Jul 2 00:22:34.921454 containerd[1837]: time="2024-07-02T00:22:34.921377762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:34.924477 containerd[1837]: time="2024-07-02T00:22:34.924400225Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186" Jul 2 00:22:34.929059 containerd[1837]: time="2024-07-02T00:22:34.928970421Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:34.935800 containerd[1837]: time="2024-07-02T00:22:34.935749462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:34.937212 containerd[1837]: time="2024-07-02T00:22:34.936775384Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id 
\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.113561235s" Jul 2 00:22:34.937212 containerd[1837]: time="2024-07-02T00:22:34.936823685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 00:22:34.958297 containerd[1837]: time="2024-07-02T00:22:34.958253833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 00:22:37.236307 containerd[1837]: time="2024-07-02T00:22:37.236182555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:37.240205 containerd[1837]: time="2024-07-02T00:22:37.240148138Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499" Jul 2 00:22:37.243994 containerd[1837]: time="2024-07-02T00:22:37.243939217Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:37.251015 containerd[1837]: time="2024-07-02T00:22:37.250960664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:37.252335 containerd[1837]: time="2024-07-02T00:22:37.251945584Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.29364345s" Jul 2 00:22:37.252335 containerd[1837]: time="2024-07-02T00:22:37.251986385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 00:22:37.275690 containerd[1837]: time="2024-07-02T00:22:37.275639580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 00:22:38.371148 containerd[1837]: time="2024-07-02T00:22:38.371094081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:38.373328 containerd[1837]: time="2024-07-02T00:22:38.373265926Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513" Jul 2 00:22:38.378767 containerd[1837]: time="2024-07-02T00:22:38.378708940Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:38.384709 containerd[1837]: time="2024-07-02T00:22:38.384645764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:38.385796 containerd[1837]: time="2024-07-02T00:22:38.385644185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.109937404s" Jul 2 00:22:38.385796 
containerd[1837]: time="2024-07-02T00:22:38.385701086Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 00:22:38.406465 containerd[1837]: time="2024-07-02T00:22:38.406422719Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 00:22:39.720689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834704139.mount: Deactivated successfully.
Jul 2 00:22:40.169028 containerd[1837]: time="2024-07-02T00:22:40.168974127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.172125 containerd[1837]: time="2024-07-02T00:22:40.172063573Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427"
Jul 2 00:22:40.175953 containerd[1837]: time="2024-07-02T00:22:40.175903329Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.184024 containerd[1837]: time="2024-07-02T00:22:40.182570327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.184024 containerd[1837]: time="2024-07-02T00:22:40.183387939Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.776917718s"
Jul 2 00:22:40.184024 containerd[1837]: time="2024-07-02T00:22:40.183430739Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 00:22:40.205482 containerd[1837]: time="2024-07-02T00:22:40.205443762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:22:40.809509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409611490.mount: Deactivated successfully.
Jul 2 00:22:40.838444 containerd[1837]: time="2024-07-02T00:22:40.838393335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.841879 containerd[1837]: time="2024-07-02T00:22:40.841807185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jul 2 00:22:40.846981 containerd[1837]: time="2024-07-02T00:22:40.846895860Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.854848 containerd[1837]: time="2024-07-02T00:22:40.854788675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:40.856094 containerd[1837]: time="2024-07-02T00:22:40.855584187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 650.093425ms"
Jul 2 00:22:40.856094 containerd[1837]: time="2024-07-02T00:22:40.855621687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:22:40.876540 containerd[1837]: time="2024-07-02T00:22:40.876499893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:22:41.473116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201358623.mount: Deactivated successfully.
Jul 2 00:22:42.911135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 2 00:22:42.919934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:43.066024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:43.068836 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:22:43.147529 kubelet[2787]: E0702 00:22:43.147475 2787 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:22:43.151863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:22:43.152145 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:22:43.925523 containerd[1837]: time="2024-07-02T00:22:43.925462763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:43.927993 containerd[1837]: time="2024-07-02T00:22:43.927935803Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jul 2 00:22:43.932004 containerd[1837]: time="2024-07-02T00:22:43.931949268Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:43.936374 containerd[1837]: time="2024-07-02T00:22:43.936343840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:43.937554 containerd[1837]: time="2024-07-02T00:22:43.937411158Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.060873463s"
Jul 2 00:22:43.937554 containerd[1837]: time="2024-07-02T00:22:43.937453258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:22:43.958637 containerd[1837]: time="2024-07-02T00:22:43.958602104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 00:22:44.647803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150181016.mount: Deactivated successfully.
Jul 2 00:22:45.189925 containerd[1837]: time="2024-07-02T00:22:45.189874402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:45.205785 containerd[1837]: time="2024-07-02T00:22:45.205718461Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757"
Jul 2 00:22:45.212064 containerd[1837]: time="2024-07-02T00:22:45.211398453Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:45.218832 containerd[1837]: time="2024-07-02T00:22:45.218798674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:45.219508 containerd[1837]: time="2024-07-02T00:22:45.219472285Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.260827081s"
Jul 2 00:22:45.219604 containerd[1837]: time="2024-07-02T00:22:45.219514986Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 00:22:47.556373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:47.563943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:47.590792 systemd[1]: Reloading requested from client PID 2874 ('systemctl') (unit session-9.scope)...
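Each successful pull is summarized by containerd in a "Pulled image" entry carrying the repo tag, the digest-manifest size in bytes, and the elapsed wall-clock time. A sketch (hypothetical `pull_stats` helper; the sample message is abridged from the coredns entry above, with the escaped quotes unescaped) of recovering those three fields:

```python
import re

# Hypothetical helper, not part of the log: parse a containerd
# "Pulled image" message into (image, size_bytes, seconds).
PULLED = re.compile(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(ms|s)')

def pull_stats(msg):
    image, size, value, unit = PULLED.search(msg).groups()
    seconds = float(value) / 1000 if unit == "ms" else float(value)
    return image, int(size), seconds

msg = ('Pulled image "registry.k8s.io/coredns/coredns:v1.10.1" with image id '
       '"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc", '
       'size "16190758" in 1.260827081s')
print(pull_stats(msg))  # ('registry.k8s.io/coredns/coredns:v1.10.1', 16190758, 1.260827081)
```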
Jul 2 00:22:47.590956 systemd[1]: Reloading...
Jul 2 00:22:47.708745 zram_generator::config[2914]: No configuration found.
Jul 2 00:22:47.840887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:22:47.919994 systemd[1]: Reloading finished in 328 ms.
Jul 2 00:22:47.963639 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:22:47.963960 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:22:47.964475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:47.971042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:22:48.200865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:22:48.210038 (kubelet)[2993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:22:48.813471 kubelet[2993]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:22:48.813471 kubelet[2993]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:22:48.813471 kubelet[2993]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:22:48.814096 kubelet[2993]: I0702 00:22:48.813551 2993 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:22:49.269854 kubelet[2993]: I0702 00:22:49.269817 2993 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:22:49.269854 kubelet[2993]: I0702 00:22:49.269849 2993 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:22:49.270156 kubelet[2993]: I0702 00:22:49.270133 2993 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:22:49.285568 kubelet[2993]: I0702 00:22:49.285535 2993 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:22:49.287472 kubelet[2993]: E0702 00:22:49.287445 2993 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.301131 kubelet[2993]: I0702 00:22:49.301106 2993 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:22:49.302917 kubelet[2993]: I0702 00:22:49.302885 2993 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:22:49.303122 kubelet[2993]: I0702 00:22:49.303097 2993 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:22:49.303627 kubelet[2993]: I0702 00:22:49.303604 2993 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:22:49.303627 kubelet[2993]: I0702 00:22:49.303630 2993 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:22:49.304451 kubelet[2993]: I0702 00:22:49.304425 2993 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:22:49.306449 kubelet[2993]: I0702 00:22:49.306427 2993 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:22:49.306551 kubelet[2993]: I0702 00:22:49.306455 2993 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:22:49.306551 kubelet[2993]: I0702 00:22:49.306489 2993 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:22:49.306551 kubelet[2993]: I0702 00:22:49.306518 2993 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:22:49.309542 kubelet[2993]: W0702 00:22:49.309045 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-106c6d4ee2&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.309542 kubelet[2993]: E0702 00:22:49.309122 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-106c6d4ee2&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.309542 kubelet[2993]: I0702 00:22:49.309244 2993 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:22:49.311681 kubelet[2993]: W0702 00:22:49.311125 2993 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:22:49.311818 kubelet[2993]: I0702 00:22:49.311805 2993 server.go:1232] "Started kubelet"
Jul 2 00:22:49.315385 kubelet[2993]: W0702 00:22:49.315339 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.315473 kubelet[2993]: E0702 00:22:49.315395 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.316183 kubelet[2993]: E0702 00:22:49.315452 2993 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.1.1-a-106c6d4ee2.17de3d89b2359396", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.1.1-a-106c6d4ee2", UID:"ci-3975.1.1-a-106c6d4ee2", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.1.1-a-106c6d4ee2"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 49, 311777686, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 49, 311777686, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.1.1-a-106c6d4ee2"}': 'Post "https://10.200.8.10:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.10:6443: connect: connection refused'(may retry after sleeping)
Jul 2 00:22:49.316434 kubelet[2993]: I0702 00:22:49.316341 2993 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 00:22:49.316636 kubelet[2993]: E0702 00:22:49.316579 2993 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 00:22:49.316636 kubelet[2993]: E0702 00:22:49.316606 2993 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:22:49.316636 kubelet[2993]: I0702 00:22:49.316632 2993 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:22:49.316796 kubelet[2993]: I0702 00:22:49.316258 2993 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:22:49.319080 kubelet[2993]: I0702 00:22:49.316308 2993 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:22:49.319080 kubelet[2993]: I0702 00:22:49.318366 2993 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 00:22:49.322949 kubelet[2993]: I0702 00:22:49.322921 2993 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:22:49.324735 kubelet[2993]: E0702 00:22:49.324717 2993 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-106c6d4ee2?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="200ms"
Jul 2 00:22:49.326020 kubelet[2993]: I0702 00:22:49.326001 2993 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:22:49.326087 kubelet[2993]: I0702 00:22:49.326074 2993 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:22:49.337400 kubelet[2993]: W0702 00:22:49.337287 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.337400 kubelet[2993]: E0702 00:22:49.337358 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.371230 kubelet[2993]: I0702 00:22:49.371203 2993 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:22:49.372755 kubelet[2993]: I0702 00:22:49.372707 2993 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:22:49.372755 kubelet[2993]: I0702 00:22:49.372731 2993 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:22:49.372998 kubelet[2993]: I0702 00:22:49.372941 2993 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 00:22:49.373150 kubelet[2993]: E0702 00:22:49.373092 2993 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:22:49.373915 kubelet[2993]: W0702 00:22:49.373875 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.375146 kubelet[2993]: E0702 00:22:49.374261 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:49.380815 kubelet[2993]: I0702 00:22:49.380800 2993 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:22:49.380905 kubelet[2993]: I0702 00:22:49.380850 2993 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:22:49.380905 kubelet[2993]: I0702 00:22:49.380874 2993 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:22:49.386243 kubelet[2993]: I0702 00:22:49.386220 2993 policy_none.go:49] "None policy: Start"
Jul 2 00:22:49.387081 kubelet[2993]: I0702 00:22:49.386816 2993 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 00:22:49.387081 kubelet[2993]: I0702 00:22:49.386844 2993 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:22:49.395678 kubelet[2993]: I0702 00:22:49.394934 2993 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:22:49.395678 kubelet[2993]: I0702 00:22:49.395243 2993 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:22:49.398498 kubelet[2993]: E0702 00:22:49.398469 2993 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-a-106c6d4ee2\" not found"
Jul 2 00:22:49.425499 kubelet[2993]: I0702 00:22:49.425448 2993 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.425904 kubelet[2993]: E0702 00:22:49.425884 2993 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.474082 kubelet[2993]: I0702 00:22:49.474047 2993 topology_manager.go:215] "Topology Admit Handler" podUID="e8d864bcfc785c8c95d84a87dc393b85" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.476091 kubelet[2993]: I0702 00:22:49.476056 2993 topology_manager.go:215] "Topology Admit Handler" podUID="e6f0c8d0b9167096a5f0e06d2e6439b7" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.477985 kubelet[2993]: I0702 00:22:49.477725 2993 topology_manager.go:215] "Topology Admit Handler" podUID="647785e9045bcccca8d0633ae7cc071c" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.526359 kubelet[2993]: E0702 00:22:49.526223 2993 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-106c6d4ee2?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="400ms"
Jul 2 00:22:49.527511 kubelet[2993]: I0702 00:22:49.527483 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8d864bcfc785c8c95d84a87dc393b85-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e8d864bcfc785c8c95d84a87dc393b85\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.527511 kubelet[2993]: I0702 00:22:49.527575 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8d864bcfc785c8c95d84a87dc393b85-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e8d864bcfc785c8c95d84a87dc393b85\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.527987 kubelet[2993]: I0702 00:22:49.527853 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.527987 kubelet[2993]: I0702 00:22:49.527930 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.528280 kubelet[2993]: I0702 00:22:49.528128 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.528280 kubelet[2993]: I0702 00:22:49.528175 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8d864bcfc785c8c95d84a87dc393b85-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e8d864bcfc785c8c95d84a87dc393b85\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.528280 kubelet[2993]: I0702 00:22:49.528227 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.528280 kubelet[2993]: I0702 00:22:49.528266 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.528710 kubelet[2993]: I0702 00:22:49.528556 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/647785e9045bcccca8d0633ae7cc071c-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-106c6d4ee2\" (UID: \"647785e9045bcccca8d0633ae7cc071c\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.629250 kubelet[2993]: I0702 00:22:49.628649 2993 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.629442 kubelet[2993]: E0702 00:22:49.629372 2993 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:49.782286 containerd[1837]: time="2024-07-02T00:22:49.782130364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-106c6d4ee2,Uid:e8d864bcfc785c8c95d84a87dc393b85,Namespace:kube-system,Attempt:0,}"
Jul 2 00:22:49.783610 containerd[1837]: time="2024-07-02T00:22:49.783567687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-106c6d4ee2,Uid:e6f0c8d0b9167096a5f0e06d2e6439b7,Namespace:kube-system,Attempt:0,}"
Jul 2 00:22:49.787125 containerd[1837]: time="2024-07-02T00:22:49.787092545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-106c6d4ee2,Uid:647785e9045bcccca8d0633ae7cc071c,Namespace:kube-system,Attempt:0,}"
Jul 2 00:22:49.926931 kubelet[2993]: E0702 00:22:49.926898 2993 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-106c6d4ee2?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="800ms"
Jul 2 00:22:50.032278 kubelet[2993]: I0702 00:22:50.032230 2993 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:50.032740 kubelet[2993]: E0702 00:22:50.032646 2993 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:50.128334 kubelet[2993]: W0702 00:22:50.128279 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-106c6d4ee2&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.128334 kubelet[2993]: E0702 00:22:50.128341 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-106c6d4ee2&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.427142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929024537.mount: Deactivated successfully.
Jul 2 00:22:50.466954 containerd[1837]: time="2024-07-02T00:22:50.466897121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:22:50.471304 containerd[1837]: time="2024-07-02T00:22:50.471217404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul 2 00:22:50.476497 containerd[1837]: time="2024-07-02T00:22:50.476460606Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:22:50.480832 containerd[1837]: time="2024-07-02T00:22:50.480799589Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:22:50.484109 containerd[1837]: time="2024-07-02T00:22:50.484058852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:22:50.488506 containerd[1837]: time="2024-07-02T00:22:50.488465337Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:22:50.492436 containerd[1837]: time="2024-07-02T00:22:50.492159908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:22:50.498181 containerd[1837]: time="2024-07-02T00:22:50.498149524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:22:50.498936 containerd[1837]: time="2024-07-02T00:22:50.498899438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 715.23815ms"
Jul 2 00:22:50.499893 containerd[1837]: time="2024-07-02T00:22:50.499857757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 717.594391ms"
Jul 2 00:22:50.507165 containerd[1837]: time="2024-07-02T00:22:50.507124397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.955351ms"
Jul 2 00:22:50.529136 kubelet[2993]: W0702 00:22:50.529101 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.529136 kubelet[2993]: E0702 00:22:50.529143 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.542990 kubelet[2993]: W0702 00:22:50.542846 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.542990 kubelet[2993]: E0702 00:22:50.542919 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.727903 kubelet[2993]: E0702 00:22:50.727786 2993 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-106c6d4ee2?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="1.6s"
Jul 2 00:22:50.811034 kubelet[2993]: W0702 00:22:50.810943 2993 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.811034 kubelet[2993]: E0702 00:22:50.811003 2993 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jul 2 00:22:50.835181 kubelet[2993]: I0702 00:22:50.835145 2993 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:50.835512 kubelet[2993]: E0702 00:22:50.835488 2993 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3975.1.1-a-106c6d4ee2"
Jul 2 00:22:51.118019 containerd[1837]: time="2024-07-02T00:22:51.117816778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:51.118019 containerd[1837]: time="2024-07-02T00:22:51.117885379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:51.119331 containerd[1837]: time="2024-07-02T00:22:51.118184985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:51.119331 containerd[1837]: time="2024-07-02T00:22:51.118525591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:51.119331 containerd[1837]: time="2024-07-02T00:22:51.118562692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:51.119331 containerd[1837]: time="2024-07-02T00:22:51.118583792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:51.120226 containerd[1837]: time="2024-07-02T00:22:51.117911880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:51.120226 containerd[1837]: time="2024-07-02T00:22:51.119564611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:51.120399 containerd[1837]: time="2024-07-02T00:22:51.119756315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:51.120399 containerd[1837]: time="2024-07-02T00:22:51.119808516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:51.120399 containerd[1837]: time="2024-07-02T00:22:51.119834217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:51.120399 containerd[1837]: time="2024-07-02T00:22:51.119854117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:51.228937 containerd[1837]: time="2024-07-02T00:22:51.228894020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-106c6d4ee2,Uid:e8d864bcfc785c8c95d84a87dc393b85,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf6c14799eebb40ada5e6ecbab79fc60b8b7c69785c69813ed67c5e8cb0abe3\"" Jul 2 00:22:51.233689 containerd[1837]: time="2024-07-02T00:22:51.233619312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-106c6d4ee2,Uid:e6f0c8d0b9167096a5f0e06d2e6439b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"50f7d34da03706f296bebeffabfb22c7b94dd78b88f5821c2a638ae0957e2a35\"" Jul 2 00:22:51.240840 containerd[1837]: time="2024-07-02T00:22:51.240799150Z" level=info msg="CreateContainer within sandbox \"9bf6c14799eebb40ada5e6ecbab79fc60b8b7c69785c69813ed67c5e8cb0abe3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:22:51.241167 containerd[1837]: time="2024-07-02T00:22:51.240997954Z" level=info msg="CreateContainer within sandbox \"50f7d34da03706f296bebeffabfb22c7b94dd78b88f5821c2a638ae0957e2a35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:22:51.248857 containerd[1837]: time="2024-07-02T00:22:51.248830605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-106c6d4ee2,Uid:647785e9045bcccca8d0633ae7cc071c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7123546f7793ed1687ecea72ccf14b60c045032d1b8371dc804de38cf6bbc0d4\"" Jul 2 00:22:51.251201 containerd[1837]: time="2024-07-02T00:22:51.251164050Z" level=info msg="CreateContainer within sandbox \"7123546f7793ed1687ecea72ccf14b60c045032d1b8371dc804de38cf6bbc0d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:22:51.319860 containerd[1837]: time="2024-07-02T00:22:51.319790774Z" level=info msg="CreateContainer within sandbox 
\"50f7d34da03706f296bebeffabfb22c7b94dd78b88f5821c2a638ae0957e2a35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710\"" Jul 2 00:22:51.320533 containerd[1837]: time="2024-07-02T00:22:51.320502388Z" level=info msg="StartContainer for \"4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710\"" Jul 2 00:22:51.342819 containerd[1837]: time="2024-07-02T00:22:51.342705116Z" level=info msg="CreateContainer within sandbox \"9bf6c14799eebb40ada5e6ecbab79fc60b8b7c69785c69813ed67c5e8cb0abe3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9a9677901a6643fb979a5f66dc6ad268662bdabc3d461fc69676bb006296d6d\"" Jul 2 00:22:51.347482 containerd[1837]: time="2024-07-02T00:22:51.347083900Z" level=info msg="StartContainer for \"d9a9677901a6643fb979a5f66dc6ad268662bdabc3d461fc69676bb006296d6d\"" Jul 2 00:22:51.349341 containerd[1837]: time="2024-07-02T00:22:51.349043238Z" level=info msg="CreateContainer within sandbox \"7123546f7793ed1687ecea72ccf14b60c045032d1b8371dc804de38cf6bbc0d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9\"" Jul 2 00:22:51.356353 containerd[1837]: time="2024-07-02T00:22:51.350387064Z" level=info msg="StartContainer for \"426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9\"" Jul 2 00:22:51.358283 kubelet[2993]: E0702 00:22:51.358262 2993 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 00:22:51.467074 containerd[1837]: time="2024-07-02T00:22:51.467028114Z" level=info msg="StartContainer for 
\"4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710\" returns successfully" Jul 2 00:22:51.500682 containerd[1837]: time="2024-07-02T00:22:51.500629262Z" level=info msg="StartContainer for \"d9a9677901a6643fb979a5f66dc6ad268662bdabc3d461fc69676bb006296d6d\" returns successfully" Jul 2 00:22:51.556196 containerd[1837]: time="2024-07-02T00:22:51.555652224Z" level=info msg="StartContainer for \"426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9\" returns successfully" Jul 2 00:22:52.439206 kubelet[2993]: I0702 00:22:52.439172 2993 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:53.534850 kubelet[2993]: E0702 00:22:53.534809 2993 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-a-106c6d4ee2\" not found" node="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:53.541220 kubelet[2993]: I0702 00:22:53.541188 2993 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:54.310436 kubelet[2993]: I0702 00:22:54.310403 2993 apiserver.go:52] "Watching apiserver" Jul 2 00:22:54.326919 kubelet[2993]: I0702 00:22:54.326882 2993 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:22:54.428853 kubelet[2993]: W0702 00:22:54.428804 2993 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:22:56.427610 systemd[1]: Reloading requested from client PID 3270 ('systemctl') (unit session-9.scope)... Jul 2 00:22:56.427626 systemd[1]: Reloading... Jul 2 00:22:56.522690 zram_generator::config[3307]: No configuration found. Jul 2 00:22:56.655099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:22:56.766433 systemd[1]: Reloading finished in 338 ms. Jul 2 00:22:56.812883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:56.813511 kubelet[2993]: I0702 00:22:56.813387 2993 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:22:56.829365 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:22:56.829825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:56.838977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:57.059835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:57.071077 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:22:57.113935 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:22:57.113935 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:22:57.113935 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:22:57.115565 kubelet[3384]: I0702 00:22:57.114045 3384 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:22:57.119832 kubelet[3384]: I0702 00:22:57.119798 3384 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:22:57.119832 kubelet[3384]: I0702 00:22:57.119826 3384 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:22:57.120060 kubelet[3384]: I0702 00:22:57.120038 3384 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:22:57.121391 kubelet[3384]: I0702 00:22:57.121367 3384 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:22:57.122391 kubelet[3384]: I0702 00:22:57.122255 3384 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:22:57.128319 kubelet[3384]: I0702 00:22:57.128258 3384 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:22:57.128735 kubelet[3384]: I0702 00:22:57.128714 3384 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:22:57.128903 kubelet[3384]: I0702 00:22:57.128884 3384 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:22:57.129028 kubelet[3384]: I0702 00:22:57.128911 3384 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:22:57.129028 kubelet[3384]: I0702 00:22:57.128925 3384 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:22:57.129028 kubelet[3384]: I0702 
00:22:57.128970 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:22:57.130644 kubelet[3384]: I0702 00:22:57.129072 3384 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:22:57.130644 kubelet[3384]: I0702 00:22:57.129091 3384 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:22:57.130644 kubelet[3384]: I0702 00:22:57.129119 3384 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:22:57.130644 kubelet[3384]: I0702 00:22:57.129137 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:22:57.131120 kubelet[3384]: I0702 00:22:57.131106 3384 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:22:57.131766 kubelet[3384]: I0702 00:22:57.131751 3384 server.go:1232] "Started kubelet" Jul 2 00:22:57.137689 kubelet[3384]: I0702 00:22:57.136945 3384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:22:57.142910 kubelet[3384]: I0702 00:22:57.142892 3384 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:22:57.145026 kubelet[3384]: I0702 00:22:57.145008 3384 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:22:57.148060 kubelet[3384]: I0702 00:22:57.148037 3384 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:22:57.148507 kubelet[3384]: I0702 00:22:57.148490 3384 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:22:57.148798 kubelet[3384]: I0702 00:22:57.148784 3384 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:22:57.150002 kubelet[3384]: I0702 00:22:57.149982 3384 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:22:57.150148 kubelet[3384]: I0702 00:22:57.150135 3384 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:22:57.160433 kubelet[3384]: 
I0702 00:22:57.160411 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:22:57.163218 kubelet[3384]: I0702 00:22:57.163198 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:22:57.163218 kubelet[3384]: I0702 00:22:57.163222 3384 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:22:57.163423 kubelet[3384]: I0702 00:22:57.163243 3384 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:22:57.163752 kubelet[3384]: E0702 00:22:57.163737 3384 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:22:57.165887 kubelet[3384]: E0702 00:22:57.165862 3384 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:22:57.165966 kubelet[3384]: E0702 00:22:57.165897 3384 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:22:57.246569 kubelet[3384]: I0702 00:22:57.246538 3384 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:22:57.246569 kubelet[3384]: I0702 00:22:57.246560 3384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:22:57.246569 kubelet[3384]: I0702 00:22:57.246580 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:22:57.246914 kubelet[3384]: I0702 00:22:57.246893 3384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:22:57.247029 kubelet[3384]: I0702 00:22:57.246926 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:22:57.247029 kubelet[3384]: I0702 00:22:57.246935 3384 policy_none.go:49] "None policy: Start" Jul 2 00:22:57.247640 kubelet[3384]: I0702 00:22:57.247617 3384 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:22:57.247640 kubelet[3384]: I0702 00:22:57.247645 3384 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:22:57.247894 kubelet[3384]: I0702 00:22:57.247876 3384 state_mem.go:75] "Updated machine memory state" Jul 2 00:22:57.249867 kubelet[3384]: I0702 00:22:57.249226 3384 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:22:57.249867 kubelet[3384]: I0702 00:22:57.249478 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:22:57.255582 kubelet[3384]: I0702 00:22:57.255560 3384 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.264951 kubelet[3384]: I0702 00:22:57.264914 3384 topology_manager.go:215] "Topology Admit Handler" podUID="e8d864bcfc785c8c95d84a87dc393b85" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.265063 kubelet[3384]: I0702 00:22:57.265032 3384 topology_manager.go:215] "Topology Admit Handler" 
podUID="e6f0c8d0b9167096a5f0e06d2e6439b7" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.265114 kubelet[3384]: I0702 00:22:57.265084 3384 topology_manager.go:215] "Topology Admit Handler" podUID="647785e9045bcccca8d0633ae7cc071c" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.274040 kubelet[3384]: W0702 00:22:57.274023 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:22:57.278739 kubelet[3384]: W0702 00:22:57.278722 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:22:57.279128 kubelet[3384]: I0702 00:22:57.279108 3384 kubelet_node_status.go:108] "Node was previously registered" node="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.279243 kubelet[3384]: I0702 00:22:57.279236 3384 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.292467 kubelet[3384]: W0702 00:22:57.292445 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:22:57.292836 kubelet[3384]: E0702 00:22:57.292753 3384 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451150 kubelet[3384]: I0702 00:22:57.451113 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " 
pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451413 kubelet[3384]: I0702 00:22:57.451166 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451413 kubelet[3384]: I0702 00:22:57.451197 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451413 kubelet[3384]: I0702 00:22:57.451226 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8d864bcfc785c8c95d84a87dc393b85-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e8d864bcfc785c8c95d84a87dc393b85\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451413 kubelet[3384]: I0702 00:22:57.451255 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8d864bcfc785c8c95d84a87dc393b85-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e8d864bcfc785c8c95d84a87dc393b85\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451413 kubelet[3384]: I0702 00:22:57.451284 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451806 kubelet[3384]: I0702 00:22:57.451314 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8d864bcfc785c8c95d84a87dc393b85-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e8d864bcfc785c8c95d84a87dc393b85\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451806 kubelet[3384]: I0702 00:22:57.451339 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6f0c8d0b9167096a5f0e06d2e6439b7-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-106c6d4ee2\" (UID: \"e6f0c8d0b9167096a5f0e06d2e6439b7\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:57.451806 kubelet[3384]: I0702 00:22:57.451364 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/647785e9045bcccca8d0633ae7cc071c-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-106c6d4ee2\" (UID: \"647785e9045bcccca8d0633ae7cc071c\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:58.132850 kubelet[3384]: I0702 00:22:58.131263 3384 apiserver.go:52] "Watching apiserver" Jul 2 00:22:58.150546 kubelet[3384]: I0702 00:22:58.150479 3384 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:22:58.224963 kubelet[3384]: W0702 00:22:58.224928 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Jul 2 00:22:58.225103 kubelet[3384]: E0702 00:22:58.225018 3384 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-106c6d4ee2\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2" Jul 2 00:22:58.303252 kubelet[3384]: I0702 00:22:58.303201 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" podStartSLOduration=1.3031416519999999 podCreationTimestamp="2024-07-02 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:58.288209647 +0000 UTC m=+1.213368356" watchObservedRunningTime="2024-07-02 00:22:58.303141652 +0000 UTC m=+1.228300361" Jul 2 00:22:58.312344 kubelet[3384]: I0702 00:22:58.312310 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-a-106c6d4ee2" podStartSLOduration=1.3122683 podCreationTimestamp="2024-07-02 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:58.30450809 +0000 UTC m=+1.229666799" watchObservedRunningTime="2024-07-02 00:22:58.3122683 +0000 UTC m=+1.237426909" Jul 2 00:22:58.329858 kubelet[3384]: I0702 00:22:58.329378 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-a-106c6d4ee2" podStartSLOduration=4.329331964 podCreationTimestamp="2024-07-02 00:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:58.313162325 +0000 UTC m=+1.238321034" watchObservedRunningTime="2024-07-02 00:22:58.329331964 +0000 UTC m=+1.254490573" Jul 2 00:23:01.256535 sudo[2416]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:01.362032 
sshd[2412]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:01.367779 systemd[1]: sshd@6-10.200.8.10:22-10.200.16.10:43104.service: Deactivated successfully. Jul 2 00:23:01.371343 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:23:01.372382 systemd-logind[1807]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:23:01.373529 systemd-logind[1807]: Removed session 9. Jul 2 00:23:08.808148 kubelet[3384]: I0702 00:23:08.808043 3384 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:23:08.810775 containerd[1837]: time="2024-07-02T00:23:08.808827506Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:23:08.813974 kubelet[3384]: I0702 00:23:08.809133 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:23:09.714172 kubelet[3384]: I0702 00:23:09.714105 3384 topology_manager.go:215] "Topology Admit Handler" podUID="395a746d-4a65-40e0-a18f-b9cec602d1fc" podNamespace="kube-system" podName="kube-proxy-kn8k7" Jul 2 00:23:09.738880 kubelet[3384]: I0702 00:23:09.738846 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/395a746d-4a65-40e0-a18f-b9cec602d1fc-kube-proxy\") pod \"kube-proxy-kn8k7\" (UID: \"395a746d-4a65-40e0-a18f-b9cec602d1fc\") " pod="kube-system/kube-proxy-kn8k7" Jul 2 00:23:09.739086 kubelet[3384]: I0702 00:23:09.738898 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/395a746d-4a65-40e0-a18f-b9cec602d1fc-xtables-lock\") pod \"kube-proxy-kn8k7\" (UID: \"395a746d-4a65-40e0-a18f-b9cec602d1fc\") " pod="kube-system/kube-proxy-kn8k7" Jul 2 00:23:09.739086 kubelet[3384]: I0702 00:23:09.738930 3384 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395a746d-4a65-40e0-a18f-b9cec602d1fc-lib-modules\") pod \"kube-proxy-kn8k7\" (UID: \"395a746d-4a65-40e0-a18f-b9cec602d1fc\") " pod="kube-system/kube-proxy-kn8k7" Jul 2 00:23:09.739086 kubelet[3384]: I0702 00:23:09.738964 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kp2g\" (UniqueName: \"kubernetes.io/projected/395a746d-4a65-40e0-a18f-b9cec602d1fc-kube-api-access-2kp2g\") pod \"kube-proxy-kn8k7\" (UID: \"395a746d-4a65-40e0-a18f-b9cec602d1fc\") " pod="kube-system/kube-proxy-kn8k7" Jul 2 00:23:09.811209 kubelet[3384]: I0702 00:23:09.811160 3384 topology_manager.go:215] "Topology Admit Handler" podUID="1fab52b9-f219-4c41-934e-91f28b4d128c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-bz5nh" Jul 2 00:23:09.839883 kubelet[3384]: I0702 00:23:09.839851 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qw25\" (UniqueName: \"kubernetes.io/projected/1fab52b9-f219-4c41-934e-91f28b4d128c-kube-api-access-4qw25\") pod \"tigera-operator-76c4974c85-bz5nh\" (UID: \"1fab52b9-f219-4c41-934e-91f28b4d128c\") " pod="tigera-operator/tigera-operator-76c4974c85-bz5nh" Jul 2 00:23:09.840279 kubelet[3384]: I0702 00:23:09.840103 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fab52b9-f219-4c41-934e-91f28b4d128c-var-lib-calico\") pod \"tigera-operator-76c4974c85-bz5nh\" (UID: \"1fab52b9-f219-4c41-934e-91f28b4d128c\") " pod="tigera-operator/tigera-operator-76c4974c85-bz5nh" Jul 2 00:23:10.028470 containerd[1837]: time="2024-07-02T00:23:10.028333098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kn8k7,Uid:395a746d-4a65-40e0-a18f-b9cec602d1fc,Namespace:kube-system,Attempt:0,}" Jul 
2 00:23:10.076310 containerd[1837]: time="2024-07-02T00:23:10.076130394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:10.076310 containerd[1837]: time="2024-07-02T00:23:10.076187295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:10.076310 containerd[1837]: time="2024-07-02T00:23:10.076213395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:10.076310 containerd[1837]: time="2024-07-02T00:23:10.076235396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:10.121594 containerd[1837]: time="2024-07-02T00:23:10.121156543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kn8k7,Uid:395a746d-4a65-40e0-a18f-b9cec602d1fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8ee74d5a162c3cd10432c9b21dc4ba1ed5054005c8daaabc2c1353d25a68095\"" Jul 2 00:23:10.121594 containerd[1837]: time="2024-07-02T00:23:10.121185244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-bz5nh,Uid:1fab52b9-f219-4c41-934e-91f28b4d128c,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:23:10.126432 containerd[1837]: time="2024-07-02T00:23:10.126393430Z" level=info msg="CreateContainer within sandbox \"b8ee74d5a162c3cd10432c9b21dc4ba1ed5054005c8daaabc2c1353d25a68095\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:23:10.187244 containerd[1837]: time="2024-07-02T00:23:10.186928637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:10.187244 containerd[1837]: time="2024-07-02T00:23:10.186994239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:10.187244 containerd[1837]: time="2024-07-02T00:23:10.187078640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:10.187244 containerd[1837]: time="2024-07-02T00:23:10.187097040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:10.189705 containerd[1837]: time="2024-07-02T00:23:10.189541281Z" level=info msg="CreateContainer within sandbox \"b8ee74d5a162c3cd10432c9b21dc4ba1ed5054005c8daaabc2c1353d25a68095\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cd0ccd2d9f3562d600fa91936f2bc83ecc0287708b7c4a18ab126973dc4d664e\"" Jul 2 00:23:10.191586 containerd[1837]: time="2024-07-02T00:23:10.190810202Z" level=info msg="StartContainer for \"cd0ccd2d9f3562d600fa91936f2bc83ecc0287708b7c4a18ab126973dc4d664e\"" Jul 2 00:23:10.267051 containerd[1837]: time="2024-07-02T00:23:10.266904068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-bz5nh,Uid:1fab52b9-f219-4c41-934e-91f28b4d128c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\"" Jul 2 00:23:10.269138 containerd[1837]: time="2024-07-02T00:23:10.269108905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:23:10.282740 containerd[1837]: time="2024-07-02T00:23:10.281591513Z" level=info msg="StartContainer for \"cd0ccd2d9f3562d600fa91936f2bc83ecc0287708b7c4a18ab126973dc4d664e\" returns successfully" Jul 2 00:23:11.235461 kubelet[3384]: I0702 00:23:11.235293 3384 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-proxy-kn8k7" podStartSLOduration=2.235249881 podCreationTimestamp="2024-07-02 00:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:23:11.235042178 +0000 UTC m=+14.160200787" watchObservedRunningTime="2024-07-02 00:23:11.235249881 +0000 UTC m=+14.160408490" Jul 2 00:23:14.200832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330002194.mount: Deactivated successfully. Jul 2 00:23:14.775808 containerd[1837]: time="2024-07-02T00:23:14.775756049Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:14.778476 containerd[1837]: time="2024-07-02T00:23:14.778413588Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076052" Jul 2 00:23:14.781387 containerd[1837]: time="2024-07-02T00:23:14.781332331Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:14.785781 containerd[1837]: time="2024-07-02T00:23:14.785727296Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:14.786685 containerd[1837]: time="2024-07-02T00:23:14.786490507Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 4.517342302s" Jul 2 00:23:14.786685 containerd[1837]: time="2024-07-02T00:23:14.786529008Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 00:23:14.788685 containerd[1837]: time="2024-07-02T00:23:14.788608238Z" level=info msg="CreateContainer within sandbox \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:23:14.823753 containerd[1837]: time="2024-07-02T00:23:14.823717754Z" level=info msg="CreateContainer within sandbox \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100\"" Jul 2 00:23:14.825766 containerd[1837]: time="2024-07-02T00:23:14.824151061Z" level=info msg="StartContainer for \"fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100\"" Jul 2 00:23:14.875175 containerd[1837]: time="2024-07-02T00:23:14.875054109Z" level=info msg="StartContainer for \"fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100\" returns successfully" Jul 2 00:23:15.243481 kubelet[3384]: I0702 00:23:15.242826 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-bz5nh" podStartSLOduration=1.7240278930000001 podCreationTimestamp="2024-07-02 00:23:09 +0000 UTC" firstStartedPulling="2024-07-02 00:23:10.268159989 +0000 UTC m=+13.193318598" lastFinishedPulling="2024-07-02 00:23:14.786915613 +0000 UTC m=+17.712074222" observedRunningTime="2024-07-02 00:23:15.242635915 +0000 UTC m=+18.167794524" watchObservedRunningTime="2024-07-02 00:23:15.242783517 +0000 UTC m=+18.167942126" Jul 2 00:23:17.973735 kubelet[3384]: I0702 00:23:17.973688 3384 topology_manager.go:215] "Topology Admit Handler" podUID="91d0d97e-9ecc-461e-a0d2-0b4937478e67" podNamespace="calico-system" podName="calico-typha-748b467744-6zfgj" Jul 2 00:23:17.998431 kubelet[3384]: I0702 
00:23:17.996515 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91d0d97e-9ecc-461e-a0d2-0b4937478e67-tigera-ca-bundle\") pod \"calico-typha-748b467744-6zfgj\" (UID: \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\") " pod="calico-system/calico-typha-748b467744-6zfgj" Jul 2 00:23:17.998431 kubelet[3384]: I0702 00:23:17.998188 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2t8l\" (UniqueName: \"kubernetes.io/projected/91d0d97e-9ecc-461e-a0d2-0b4937478e67-kube-api-access-f2t8l\") pod \"calico-typha-748b467744-6zfgj\" (UID: \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\") " pod="calico-system/calico-typha-748b467744-6zfgj" Jul 2 00:23:17.999178 kubelet[3384]: I0702 00:23:17.998765 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91d0d97e-9ecc-461e-a0d2-0b4937478e67-typha-certs\") pod \"calico-typha-748b467744-6zfgj\" (UID: \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\") " pod="calico-system/calico-typha-748b467744-6zfgj" Jul 2 00:23:18.075626 kubelet[3384]: I0702 00:23:18.075280 3384 topology_manager.go:215] "Topology Admit Handler" podUID="f71e83a5-0d99-415e-a715-322fc70233b8" podNamespace="calico-system" podName="calico-node-hst9w" Jul 2 00:23:18.100114 kubelet[3384]: I0702 00:23:18.099213 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-log-dir\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100114 kubelet[3384]: I0702 00:23:18.099276 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-policysync\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100114 kubelet[3384]: I0702 00:23:18.099306 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-lib-calico\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100114 kubelet[3384]: I0702 00:23:18.099337 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-lib-modules\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100114 kubelet[3384]: I0702 00:23:18.099366 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f71e83a5-0d99-415e-a715-322fc70233b8-node-certs\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100464 kubelet[3384]: I0702 00:23:18.099397 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-flexvol-driver-host\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100464 kubelet[3384]: I0702 00:23:18.099429 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-xtables-lock\") pod 
\"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100464 kubelet[3384]: I0702 00:23:18.099458 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-run-calico\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100464 kubelet[3384]: I0702 00:23:18.099486 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-bin-dir\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100464 kubelet[3384]: I0702 00:23:18.099515 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71e83a5-0d99-415e-a715-322fc70233b8-tigera-ca-bundle\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100712 kubelet[3384]: I0702 00:23:18.099540 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-net-dir\") pod \"calico-node-hst9w\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.100712 kubelet[3384]: I0702 00:23:18.099567 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz958\" (UniqueName: \"kubernetes.io/projected/f71e83a5-0d99-415e-a715-322fc70233b8-kube-api-access-cz958\") pod \"calico-node-hst9w\" (UID: 
\"f71e83a5-0d99-415e-a715-322fc70233b8\") " pod="calico-system/calico-node-hst9w" Jul 2 00:23:18.187035 kubelet[3384]: I0702 00:23:18.186993 3384 topology_manager.go:215] "Topology Admit Handler" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" podNamespace="calico-system" podName="csi-node-driver-586k2" Jul 2 00:23:18.189968 kubelet[3384]: E0702 00:23:18.189822 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:18.201790 kubelet[3384]: I0702 00:23:18.200242 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/681633ee-4999-4e86-b7fb-b802b78615ed-varrun\") pod \"csi-node-driver-586k2\" (UID: \"681633ee-4999-4e86-b7fb-b802b78615ed\") " pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:18.201790 kubelet[3384]: I0702 00:23:18.200314 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhnv\" (UniqueName: \"kubernetes.io/projected/681633ee-4999-4e86-b7fb-b802b78615ed-kube-api-access-rdhnv\") pod \"csi-node-driver-586k2\" (UID: \"681633ee-4999-4e86-b7fb-b802b78615ed\") " pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:18.201790 kubelet[3384]: I0702 00:23:18.200347 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/681633ee-4999-4e86-b7fb-b802b78615ed-socket-dir\") pod \"csi-node-driver-586k2\" (UID: \"681633ee-4999-4e86-b7fb-b802b78615ed\") " pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:18.201790 kubelet[3384]: I0702 00:23:18.200453 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/681633ee-4999-4e86-b7fb-b802b78615ed-registration-dir\") pod \"csi-node-driver-586k2\" (UID: \"681633ee-4999-4e86-b7fb-b802b78615ed\") " pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:18.204677 kubelet[3384]: I0702 00:23:18.202943 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/681633ee-4999-4e86-b7fb-b802b78615ed-kubelet-dir\") pod \"csi-node-driver-586k2\" (UID: \"681633ee-4999-4e86-b7fb-b802b78615ed\") " pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:18.208890 kubelet[3384]: E0702 00:23:18.206161 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.209189 kubelet[3384]: W0702 00:23:18.209014 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.209189 kubelet[3384]: E0702 00:23:18.209047 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.217711 kubelet[3384]: E0702 00:23:18.214719 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.217711 kubelet[3384]: W0702 00:23:18.214737 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.217711 kubelet[3384]: E0702 00:23:18.214768 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.218223 kubelet[3384]: E0702 00:23:18.218068 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.218223 kubelet[3384]: W0702 00:23:18.218085 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.218223 kubelet[3384]: E0702 00:23:18.218112 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.219797 kubelet[3384]: E0702 00:23:18.218827 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.219797 kubelet[3384]: W0702 00:23:18.218842 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.219797 kubelet[3384]: E0702 00:23:18.218862 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.220710 kubelet[3384]: E0702 00:23:18.220693 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.220710 kubelet[3384]: W0702 00:23:18.220709 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.220824 kubelet[3384]: E0702 00:23:18.220733 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.221151 kubelet[3384]: E0702 00:23:18.221126 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.228880 kubelet[3384]: W0702 00:23:18.226723 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.228880 kubelet[3384]: E0702 00:23:18.228843 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.230109 kubelet[3384]: E0702 00:23:18.229870 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.230109 kubelet[3384]: W0702 00:23:18.229884 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.234688 kubelet[3384]: E0702 00:23:18.231295 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.234688 kubelet[3384]: E0702 00:23:18.231367 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.234688 kubelet[3384]: W0702 00:23:18.231376 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.234688 kubelet[3384]: E0702 00:23:18.231911 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.235152 kubelet[3384]: E0702 00:23:18.235135 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.235152 kubelet[3384]: W0702 00:23:18.235151 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.235384 kubelet[3384]: E0702 00:23:18.235372 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.237702 kubelet[3384]: E0702 00:23:18.235754 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.237702 kubelet[3384]: W0702 00:23:18.235768 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.237702 kubelet[3384]: E0702 00:23:18.236829 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.237702 kubelet[3384]: W0702 00:23:18.236852 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.238211 kubelet[3384]: E0702 00:23:18.238199 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.238323 kubelet[3384]: E0702 00:23:18.238307 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.238845 kubelet[3384]: E0702 00:23:18.238827 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.238928 kubelet[3384]: W0702 00:23:18.238849 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.238928 kubelet[3384]: E0702 00:23:18.238871 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.239128 kubelet[3384]: E0702 00:23:18.239114 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.239128 kubelet[3384]: W0702 00:23:18.239128 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.239227 kubelet[3384]: E0702 00:23:18.239152 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.241950 kubelet[3384]: E0702 00:23:18.241700 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.241950 kubelet[3384]: W0702 00:23:18.241721 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.241950 kubelet[3384]: E0702 00:23:18.241739 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.242119 kubelet[3384]: E0702 00:23:18.242052 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.242406 kubelet[3384]: W0702 00:23:18.242063 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.242406 kubelet[3384]: E0702 00:23:18.242305 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.243987 kubelet[3384]: E0702 00:23:18.243974 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.244209 kubelet[3384]: W0702 00:23:18.244033 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.244209 kubelet[3384]: E0702 00:23:18.244050 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.246221 kubelet[3384]: E0702 00:23:18.244437 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.246221 kubelet[3384]: W0702 00:23:18.244449 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.246221 kubelet[3384]: E0702 00:23:18.244465 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.247577 kubelet[3384]: E0702 00:23:18.247563 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.247795 kubelet[3384]: W0702 00:23:18.247780 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.247918 kubelet[3384]: E0702 00:23:18.247904 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.276051 kubelet[3384]: E0702 00:23:18.276026 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.276171 kubelet[3384]: W0702 00:23:18.276063 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.276171 kubelet[3384]: E0702 00:23:18.276087 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.289728 containerd[1837]: time="2024-07-02T00:23:18.288585105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748b467744-6zfgj,Uid:91d0d97e-9ecc-461e-a0d2-0b4937478e67,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:18.321856 kubelet[3384]: E0702 00:23:18.321810 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.321856 kubelet[3384]: W0702 00:23:18.321852 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.322089 kubelet[3384]: E0702 00:23:18.321883 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.322818 kubelet[3384]: E0702 00:23:18.322795 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.322818 kubelet[3384]: W0702 00:23:18.322816 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.324731 kubelet[3384]: E0702 00:23:18.324711 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.325843 kubelet[3384]: E0702 00:23:18.325746 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.325843 kubelet[3384]: W0702 00:23:18.325761 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.325843 kubelet[3384]: E0702 00:23:18.325794 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.326760 kubelet[3384]: E0702 00:23:18.326212 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.326760 kubelet[3384]: W0702 00:23:18.326226 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.326760 kubelet[3384]: E0702 00:23:18.326689 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.327923 kubelet[3384]: E0702 00:23:18.327799 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.327923 kubelet[3384]: W0702 00:23:18.327814 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.327923 kubelet[3384]: E0702 00:23:18.327869 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.329826 kubelet[3384]: E0702 00:23:18.328346 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.329826 kubelet[3384]: W0702 00:23:18.328360 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.329826 kubelet[3384]: E0702 00:23:18.329698 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.331965 kubelet[3384]: E0702 00:23:18.331772 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.331965 kubelet[3384]: W0702 00:23:18.331787 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.331965 kubelet[3384]: E0702 00:23:18.331921 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.332269 kubelet[3384]: E0702 00:23:18.332174 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.332269 kubelet[3384]: W0702 00:23:18.332184 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.332536 kubelet[3384]: E0702 00:23:18.332465 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.333865 kubelet[3384]: E0702 00:23:18.333790 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.333865 kubelet[3384]: W0702 00:23:18.333804 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.335732 kubelet[3384]: E0702 00:23:18.333953 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.336938 kubelet[3384]: E0702 00:23:18.336798 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.336938 kubelet[3384]: W0702 00:23:18.336813 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.336938 kubelet[3384]: E0702 00:23:18.336873 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.337559 kubelet[3384]: E0702 00:23:18.337318 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.337559 kubelet[3384]: W0702 00:23:18.337339 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.337559 kubelet[3384]: E0702 00:23:18.337427 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.338797 kubelet[3384]: E0702 00:23:18.338567 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.338797 kubelet[3384]: W0702 00:23:18.338579 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.341342 kubelet[3384]: E0702 00:23:18.340727 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.341597 kubelet[3384]: E0702 00:23:18.341587 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.341721 kubelet[3384]: W0702 00:23:18.341697 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.341896 kubelet[3384]: E0702 00:23:18.341827 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.342145 kubelet[3384]: E0702 00:23:18.342122 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.342275 kubelet[3384]: W0702 00:23:18.342221 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.342416 kubelet[3384]: E0702 00:23:18.342310 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.343455 kubelet[3384]: E0702 00:23:18.342864 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.343455 kubelet[3384]: W0702 00:23:18.342877 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.348683 kubelet[3384]: E0702 00:23:18.347289 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.348683 kubelet[3384]: W0702 00:23:18.347303 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.348683 kubelet[3384]: E0702 00:23:18.347481 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.348683 kubelet[3384]: W0702 00:23:18.347491 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.348683 kubelet[3384]: E0702 00:23:18.347646 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.348683 kubelet[3384]: W0702 00:23:18.347654 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.348683 kubelet[3384]: E0702 00:23:18.347695 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.351757 kubelet[3384]: E0702 00:23:18.350841 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.351757 kubelet[3384]: W0702 00:23:18.350854 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.351757 kubelet[3384]: E0702 00:23:18.350872 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.351757 kubelet[3384]: E0702 00:23:18.350900 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.355945 kubelet[3384]: E0702 00:23:18.355798 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.355945 kubelet[3384]: W0702 00:23:18.355813 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.355945 kubelet[3384]: E0702 00:23:18.355830 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.355945 kubelet[3384]: E0702 00:23:18.355856 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.356306 kubelet[3384]: E0702 00:23:18.356195 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.356306 kubelet[3384]: W0702 00:23:18.356208 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.356306 kubelet[3384]: E0702 00:23:18.356226 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.356508 kubelet[3384]: E0702 00:23:18.356498 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.356703 kubelet[3384]: W0702 00:23:18.356689 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.359505 kubelet[3384]: E0702 00:23:18.358908 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.359925 kubelet[3384]: E0702 00:23:18.359912 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.363740 kubelet[3384]: E0702 00:23:18.363725 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.363853 kubelet[3384]: W0702 00:23:18.363828 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.364044 kubelet[3384]: E0702 00:23:18.364033 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.364232 kubelet[3384]: E0702 00:23:18.364220 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.364677 kubelet[3384]: W0702 00:23:18.364298 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.364677 kubelet[3384]: E0702 00:23:18.364317 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:18.366066 kubelet[3384]: E0702 00:23:18.365809 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.366066 kubelet[3384]: W0702 00:23:18.365822 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.366066 kubelet[3384]: E0702 00:23:18.365838 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.383258 containerd[1837]: time="2024-07-02T00:23:18.382700389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hst9w,Uid:f71e83a5-0d99-415e-a715-322fc70233b8,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:18.393634 containerd[1837]: time="2024-07-02T00:23:18.393552448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:18.393773 containerd[1837]: time="2024-07-02T00:23:18.393620749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:18.393773 containerd[1837]: time="2024-07-02T00:23:18.393645050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:18.393773 containerd[1837]: time="2024-07-02T00:23:18.393678550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:18.398682 kubelet[3384]: E0702 00:23:18.398306 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:18.400685 kubelet[3384]: W0702 00:23:18.398994 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:18.400685 kubelet[3384]: E0702 00:23:18.399028 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:18.471268 containerd[1837]: time="2024-07-02T00:23:18.471227791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748b467744-6zfgj,Uid:91d0d97e-9ecc-461e-a0d2-0b4937478e67,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\"" Jul 2 00:23:18.473054 containerd[1837]: time="2024-07-02T00:23:18.472963516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:23:19.509737 containerd[1837]: time="2024-07-02T00:23:19.509468758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:19.509737 containerd[1837]: time="2024-07-02T00:23:19.509526459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:19.509737 containerd[1837]: time="2024-07-02T00:23:19.509551559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:19.509737 containerd[1837]: time="2024-07-02T00:23:19.509570359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:19.570438 containerd[1837]: time="2024-07-02T00:23:19.570391854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hst9w,Uid:f71e83a5-0d99-415e-a715-322fc70233b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\"" Jul 2 00:23:20.164569 kubelet[3384]: E0702 00:23:20.164511 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:21.882307 containerd[1837]: time="2024-07-02T00:23:21.882178815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:21.885517 containerd[1837]: time="2024-07-02T00:23:21.885446069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:23:21.890568 containerd[1837]: time="2024-07-02T00:23:21.890535553Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:21.898847 containerd[1837]: time="2024-07-02T00:23:21.898794090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:21.899685 containerd[1837]: time="2024-07-02T00:23:21.899634904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo 
digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.426582087s" Jul 2 00:23:21.900742 containerd[1837]: time="2024-07-02T00:23:21.900701221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:23:21.904828 containerd[1837]: time="2024-07-02T00:23:21.904803289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:23:21.929692 containerd[1837]: time="2024-07-02T00:23:21.926043540Z" level=info msg="CreateContainer within sandbox \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:23:21.986082 containerd[1837]: time="2024-07-02T00:23:21.985932830Z" level=info msg="CreateContainer within sandbox \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\"" Jul 2 00:23:21.988896 containerd[1837]: time="2024-07-02T00:23:21.988853478Z" level=info msg="StartContainer for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\"" Jul 2 00:23:22.159521 containerd[1837]: time="2024-07-02T00:23:22.159312095Z" level=info msg="StartContainer for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" returns successfully" Jul 2 00:23:22.163540 kubelet[3384]: E0702 00:23:22.163501 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:22.260683 containerd[1837]: time="2024-07-02T00:23:22.260629755Z" 
level=info msg="StopContainer for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" with timeout 300 (s)" Jul 2 00:23:22.261476 containerd[1837]: time="2024-07-02T00:23:22.261292665Z" level=info msg="Stop container \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" with signal terminated" Jul 2 00:23:22.912737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd-rootfs.mount: Deactivated successfully. Jul 2 00:23:23.474899 containerd[1837]: time="2024-07-02T00:23:23.474816838Z" level=info msg="shim disconnected" id=cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd namespace=k8s.io Jul 2 00:23:23.476859 containerd[1837]: time="2024-07-02T00:23:23.475453548Z" level=warning msg="cleaning up after shim disconnected" id=cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd namespace=k8s.io Jul 2 00:23:23.476859 containerd[1837]: time="2024-07-02T00:23:23.475480348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:23.505587 containerd[1837]: time="2024-07-02T00:23:23.505535111Z" level=info msg="StopContainer for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" returns successfully" Jul 2 00:23:23.506632 containerd[1837]: time="2024-07-02T00:23:23.506602027Z" level=info msg="StopPodSandbox for \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\"" Jul 2 00:23:23.506996 containerd[1837]: time="2024-07-02T00:23:23.506883232Z" level=info msg="Container to stop \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:23:23.511185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a-shm.mount: Deactivated successfully. 
Jul 2 00:23:23.561122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a-rootfs.mount: Deactivated successfully. Jul 2 00:23:23.568008 containerd[1837]: time="2024-07-02T00:23:23.567792469Z" level=info msg="shim disconnected" id=4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a namespace=k8s.io Jul 2 00:23:23.568008 containerd[1837]: time="2024-07-02T00:23:23.567958071Z" level=warning msg="cleaning up after shim disconnected" id=4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a namespace=k8s.io Jul 2 00:23:23.568008 containerd[1837]: time="2024-07-02T00:23:23.567976772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:23.582381 containerd[1837]: time="2024-07-02T00:23:23.582337693Z" level=info msg="TearDown network for sandbox \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" successfully" Jul 2 00:23:23.582381 containerd[1837]: time="2024-07-02T00:23:23.582371493Z" level=info msg="StopPodSandbox for \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" returns successfully" Jul 2 00:23:23.601846 kubelet[3384]: I0702 00:23:23.600639 3384 topology_manager.go:215] "Topology Admit Handler" podUID="5ebf7145-f3d0-44af-8a38-97a4039fff2e" podNamespace="calico-system" podName="calico-typha-54f58ccdfc-nj4qn" Jul 2 00:23:23.601846 kubelet[3384]: E0702 00:23:23.600732 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91d0d97e-9ecc-461e-a0d2-0b4937478e67" containerName="calico-typha" Jul 2 00:23:23.601846 kubelet[3384]: I0702 00:23:23.600765 3384 memory_manager.go:346] "RemoveStaleState removing state" podUID="91d0d97e-9ecc-461e-a0d2-0b4937478e67" containerName="calico-typha" Jul 2 00:23:23.638905 kubelet[3384]: E0702 00:23:23.638873 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.638905 kubelet[3384]: W0702 
00:23:23.638901 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.638933 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.639144 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.639857 kubelet[3384]: W0702 00:23:23.639156 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.639173 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.639359 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.639857 kubelet[3384]: W0702 00:23:23.639371 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.639387 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.639566 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.639857 kubelet[3384]: W0702 00:23:23.639579 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.639857 kubelet[3384]: E0702 00:23:23.639595 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.639921 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641367 kubelet[3384]: W0702 00:23:23.639932 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.639948 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.640130 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641367 kubelet[3384]: W0702 00:23:23.640139 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.640153 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.640336 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641367 kubelet[3384]: W0702 00:23:23.640348 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.640364 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.641367 kubelet[3384]: E0702 00:23:23.640544 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641770 kubelet[3384]: W0702 00:23:23.640556 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.641770 kubelet[3384]: E0702 00:23:23.640571 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.641770 kubelet[3384]: E0702 00:23:23.640765 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641770 kubelet[3384]: W0702 00:23:23.640777 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.641770 kubelet[3384]: E0702 00:23:23.640794 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.641770 kubelet[3384]: E0702 00:23:23.640966 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641770 kubelet[3384]: W0702 00:23:23.640976 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.641770 kubelet[3384]: E0702 00:23:23.640990 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.641770 kubelet[3384]: E0702 00:23:23.641201 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.641770 kubelet[3384]: W0702 00:23:23.641214 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.642152 kubelet[3384]: E0702 00:23:23.641231 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.642152 kubelet[3384]: E0702 00:23:23.641420 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.642152 kubelet[3384]: W0702 00:23:23.641430 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.642152 kubelet[3384]: E0702 00:23:23.641445 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.681370 kubelet[3384]: E0702 00:23:23.681266 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.681370 kubelet[3384]: W0702 00:23:23.681294 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.681370 kubelet[3384]: E0702 00:23:23.681324 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.681978 kubelet[3384]: I0702 00:23:23.681735 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2t8l\" (UniqueName: \"kubernetes.io/projected/91d0d97e-9ecc-461e-a0d2-0b4937478e67-kube-api-access-f2t8l\") pod \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\" (UID: \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\") " Jul 2 00:23:23.682360 kubelet[3384]: E0702 00:23:23.682191 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.682360 kubelet[3384]: W0702 00:23:23.682209 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.682360 kubelet[3384]: E0702 00:23:23.682248 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.682781 kubelet[3384]: I0702 00:23:23.682588 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91d0d97e-9ecc-461e-a0d2-0b4937478e67-typha-certs\") pod \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\" (UID: \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\") " Jul 2 00:23:23.683059 kubelet[3384]: E0702 00:23:23.682944 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.683059 kubelet[3384]: W0702 00:23:23.682967 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.683059 kubelet[3384]: E0702 00:23:23.683000 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.683598 kubelet[3384]: E0702 00:23:23.683411 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.683598 kubelet[3384]: W0702 00:23:23.683436 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.683598 kubelet[3384]: E0702 00:23:23.683456 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.683598 kubelet[3384]: I0702 00:23:23.683486 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91d0d97e-9ecc-461e-a0d2-0b4937478e67-tigera-ca-bundle\") pod \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\" (UID: \"91d0d97e-9ecc-461e-a0d2-0b4937478e67\") " Jul 2 00:23:23.684105 kubelet[3384]: E0702 00:23:23.683941 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.684105 kubelet[3384]: W0702 00:23:23.683956 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.684105 kubelet[3384]: E0702 00:23:23.683974 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.684105 kubelet[3384]: I0702 00:23:23.684005 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ebf7145-f3d0-44af-8a38-97a4039fff2e-tigera-ca-bundle\") pod \"calico-typha-54f58ccdfc-nj4qn\" (UID: \"5ebf7145-f3d0-44af-8a38-97a4039fff2e\") " pod="calico-system/calico-typha-54f58ccdfc-nj4qn" Jul 2 00:23:23.684680 kubelet[3384]: E0702 00:23:23.684618 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.684680 kubelet[3384]: W0702 00:23:23.684634 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.685276 kubelet[3384]: E0702 00:23:23.684994 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.685276 kubelet[3384]: W0702 00:23:23.685008 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.685276 kubelet[3384]: E0702 00:23:23.685026 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.686644 kubelet[3384]: E0702 00:23:23.686629 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.686773 kubelet[3384]: W0702 00:23:23.686758 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.686867 kubelet[3384]: E0702 00:23:23.686855 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.687111 kubelet[3384]: E0702 00:23:23.687058 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.687111 kubelet[3384]: I0702 00:23:23.687093 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5ebf7145-f3d0-44af-8a38-97a4039fff2e-typha-certs\") pod \"calico-typha-54f58ccdfc-nj4qn\" (UID: \"5ebf7145-f3d0-44af-8a38-97a4039fff2e\") " pod="calico-system/calico-typha-54f58ccdfc-nj4qn" Jul 2 00:23:23.687560 kubelet[3384]: E0702 00:23:23.687362 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.687560 kubelet[3384]: W0702 00:23:23.687376 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.687560 kubelet[3384]: E0702 00:23:23.687395 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.688972 kubelet[3384]: E0702 00:23:23.688958 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.689332 kubelet[3384]: W0702 00:23:23.689061 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.689332 kubelet[3384]: E0702 00:23:23.689101 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.693007 kubelet[3384]: E0702 00:23:23.692915 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.693007 kubelet[3384]: W0702 00:23:23.692930 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.693537 kubelet[3384]: E0702 00:23:23.693268 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.693537 kubelet[3384]: W0702 00:23:23.693281 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.693537 kubelet[3384]: E0702 00:23:23.693298 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.693537 kubelet[3384]: I0702 00:23:23.693329 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvjz\" (UniqueName: \"kubernetes.io/projected/5ebf7145-f3d0-44af-8a38-97a4039fff2e-kube-api-access-wxvjz\") pod \"calico-typha-54f58ccdfc-nj4qn\" (UID: \"5ebf7145-f3d0-44af-8a38-97a4039fff2e\") " pod="calico-system/calico-typha-54f58ccdfc-nj4qn" Jul 2 00:23:23.693537 kubelet[3384]: I0702 00:23:23.693408 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d0d97e-9ecc-461e-a0d2-0b4937478e67-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "91d0d97e-9ecc-461e-a0d2-0b4937478e67" (UID: "91d0d97e-9ecc-461e-a0d2-0b4937478e67"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:23:23.693537 kubelet[3384]: E0702 00:23:23.693428 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.693836 kubelet[3384]: I0702 00:23:23.693767 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d0d97e-9ecc-461e-a0d2-0b4937478e67-kube-api-access-f2t8l" (OuterVolumeSpecName: "kube-api-access-f2t8l") pod "91d0d97e-9ecc-461e-a0d2-0b4937478e67" (UID: "91d0d97e-9ecc-461e-a0d2-0b4937478e67"). InnerVolumeSpecName "kube-api-access-f2t8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:23:23.694778 kubelet[3384]: I0702 00:23:23.693980 3384 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91d0d97e-9ecc-461e-a0d2-0b4937478e67-typha-certs\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:23.695014 kubelet[3384]: E0702 00:23:23.694905 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.695014 kubelet[3384]: W0702 00:23:23.694919 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.695014 kubelet[3384]: E0702 00:23:23.694942 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.695526 systemd[1]: var-lib-kubelet-pods-91d0d97e\x2d9ecc\x2d461e\x2da0d2\x2d0b4937478e67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df2t8l.mount: Deactivated successfully. 
Jul 2 00:23:23.696245 kubelet[3384]: E0702 00:23:23.695777 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.696245 kubelet[3384]: W0702 00:23:23.695788 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.699635 kubelet[3384]: E0702 00:23:23.698564 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.699635 kubelet[3384]: W0702 00:23:23.698579 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.699635 kubelet[3384]: E0702 00:23:23.698596 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.699635 kubelet[3384]: I0702 00:23:23.698969 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91d0d97e-9ecc-461e-a0d2-0b4937478e67-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "91d0d97e-9ecc-461e-a0d2-0b4937478e67" (UID: "91d0d97e-9ecc-461e-a0d2-0b4937478e67"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:23:23.699635 kubelet[3384]: E0702 00:23:23.698995 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.699970 systemd[1]: var-lib-kubelet-pods-91d0d97e\x2d9ecc\x2d461e\x2da0d2\x2d0b4937478e67-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
Jul 2 00:23:23.700159 systemd[1]: var-lib-kubelet-pods-91d0d97e\x2d9ecc\x2d461e\x2da0d2\x2d0b4937478e67-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jul 2 00:23:23.794950 kubelet[3384]: E0702 00:23:23.794836 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.794950 kubelet[3384]: W0702 00:23:23.794857 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.794950 kubelet[3384]: E0702 00:23:23.794884 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.795422 kubelet[3384]: E0702 00:23:23.795380 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.795422 kubelet[3384]: W0702 00:23:23.795397 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.795422 kubelet[3384]: E0702 00:23:23.795424 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.797537 kubelet[3384]: E0702 00:23:23.795851 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.797537 kubelet[3384]: W0702 00:23:23.795863 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.797537 kubelet[3384]: E0702 00:23:23.795881 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.797537 kubelet[3384]: I0702 00:23:23.795941 3384 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f2t8l\" (UniqueName: \"kubernetes.io/projected/91d0d97e-9ecc-461e-a0d2-0b4937478e67-kube-api-access-f2t8l\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:23.797537 kubelet[3384]: I0702 00:23:23.795958 3384 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91d0d97e-9ecc-461e-a0d2-0b4937478e67-tigera-ca-bundle\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:23.797537 kubelet[3384]: E0702 00:23:23.796153 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.797537 kubelet[3384]: W0702 00:23:23.796163 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.797537 kubelet[3384]: E0702 00:23:23.796177 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.797537 kubelet[3384]: E0702 00:23:23.796339 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.797537 kubelet[3384]: W0702 00:23:23.796347 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.796363 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.796503 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.798111 kubelet[3384]: W0702 00:23:23.796510 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.796526 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.796731 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.798111 kubelet[3384]: W0702 00:23:23.796741 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.796756 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.797812 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.798111 kubelet[3384]: W0702 00:23:23.797825 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798111 kubelet[3384]: E0702 00:23:23.797850 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.798624 kubelet[3384]: E0702 00:23:23.798089 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.798624 kubelet[3384]: W0702 00:23:23.798101 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798624 kubelet[3384]: E0702 00:23:23.798131 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.798624 kubelet[3384]: E0702 00:23:23.798332 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.798624 kubelet[3384]: W0702 00:23:23.798343 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798624 kubelet[3384]: E0702 00:23:23.798365 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.798624 kubelet[3384]: E0702 00:23:23.798578 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.798624 kubelet[3384]: W0702 00:23:23.798589 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.798624 kubelet[3384]: E0702 00:23:23.798610 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.799742 kubelet[3384]: E0702 00:23:23.799408 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.799742 kubelet[3384]: W0702 00:23:23.799422 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.799742 kubelet[3384]: E0702 00:23:23.799440 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.800052 kubelet[3384]: E0702 00:23:23.799828 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.800052 kubelet[3384]: W0702 00:23:23.799838 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.800052 kubelet[3384]: E0702 00:23:23.799856 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.800634 kubelet[3384]: E0702 00:23:23.800493 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.800634 kubelet[3384]: W0702 00:23:23.800505 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.800634 kubelet[3384]: E0702 00:23:23.800553 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.801098 kubelet[3384]: E0702 00:23:23.800945 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.801098 kubelet[3384]: W0702 00:23:23.800958 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.801098 kubelet[3384]: E0702 00:23:23.800988 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.801414 kubelet[3384]: E0702 00:23:23.801401 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.801701 kubelet[3384]: W0702 00:23:23.801488 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.801701 kubelet[3384]: E0702 00:23:23.801511 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:23.805925 kubelet[3384]: E0702 00:23:23.805905 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.806045 kubelet[3384]: W0702 00:23:23.806034 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.806123 kubelet[3384]: E0702 00:23:23.806116 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.811052 kubelet[3384]: E0702 00:23:23.811032 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:23.811052 kubelet[3384]: W0702 00:23:23.811049 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:23.811162 kubelet[3384]: E0702 00:23:23.811067 3384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:23.907559 containerd[1837]: time="2024-07-02T00:23:23.907517396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f58ccdfc-nj4qn,Uid:5ebf7145-f3d0-44af-8a38-97a4039fff2e,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:23.976192 containerd[1837]: time="2024-07-02T00:23:23.974796632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:23.976192 containerd[1837]: time="2024-07-02T00:23:23.974858533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:23.976192 containerd[1837]: time="2024-07-02T00:23:23.974890733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:23.976192 containerd[1837]: time="2024-07-02T00:23:23.974909733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:24.006235 systemd[1]: run-containerd-runc-k8s.io-43afd0a8e1ff821ffd2b4413a17ce00a8b18a32b66404db2bef2bc64275cfbbc-runc.QdH63u.mount: Deactivated successfully. Jul 2 00:23:24.050066 containerd[1837]: time="2024-07-02T00:23:24.049708784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f58ccdfc-nj4qn,Uid:5ebf7145-f3d0-44af-8a38-97a4039fff2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"43afd0a8e1ff821ffd2b4413a17ce00a8b18a32b66404db2bef2bc64275cfbbc\"" Jul 2 00:23:24.060285 containerd[1837]: time="2024-07-02T00:23:24.060244947Z" level=info msg="CreateContainer within sandbox \"43afd0a8e1ff821ffd2b4413a17ce00a8b18a32b66404db2bef2bc64275cfbbc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:23:24.134790 containerd[1837]: time="2024-07-02T00:23:24.134735393Z" level=info msg="CreateContainer within sandbox \"43afd0a8e1ff821ffd2b4413a17ce00a8b18a32b66404db2bef2bc64275cfbbc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8c8aa052144abacda6d61ac114858375e8867e8edccad569b2dcb19909fb087\"" Jul 2 00:23:24.135406 containerd[1837]: time="2024-07-02T00:23:24.135363902Z" level=info msg="StartContainer for \"a8c8aa052144abacda6d61ac114858375e8867e8edccad569b2dcb19909fb087\"" Jul 2 00:23:24.164424 kubelet[3384]: E0702 00:23:24.163973 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:24.268256 kubelet[3384]: I0702 00:23:24.268220 3384 scope.go:117] "RemoveContainer" containerID="cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd" Jul 2 00:23:24.284763 containerd[1837]: time="2024-07-02T00:23:24.282183762Z" level=info msg="RemoveContainer for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\"" Jul 2 00:23:24.304148 containerd[1837]: time="2024-07-02T00:23:24.303016182Z" level=info msg="RemoveContainer for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" returns successfully" Jul 2 00:23:24.306145 kubelet[3384]: I0702 00:23:24.306111 3384 scope.go:117] "RemoveContainer" containerID="cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd" Jul 2 00:23:24.307109 containerd[1837]: time="2024-07-02T00:23:24.306858541Z" level=error msg="ContainerStatus for \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\": not found" Jul 2 00:23:24.308520 kubelet[3384]: E0702 00:23:24.308478 3384 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\": not found" containerID="cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd" Jul 2 00:23:24.308814 kubelet[3384]: I0702 00:23:24.308637 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd"} err="failed to get container status \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"cd33c3b9415d61a6ee3924bb342d51b8990c8cf477f32198e928cfa51df274bd\": not found" Jul 2 00:23:24.327954 containerd[1837]: time="2024-07-02T00:23:24.327700862Z" level=info msg="StartContainer for \"a8c8aa052144abacda6d61ac114858375e8867e8edccad569b2dcb19909fb087\" returns successfully" Jul 2 00:23:24.715572 containerd[1837]: time="2024-07-02T00:23:24.715515729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:24.721087 containerd[1837]: time="2024-07-02T00:23:24.721022814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:23:24.725693 containerd[1837]: time="2024-07-02T00:23:24.725052476Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:24.731789 containerd[1837]: time="2024-07-02T00:23:24.731750579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:24.732595 containerd[1837]: time="2024-07-02T00:23:24.732479591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.8276251s" Jul 2 00:23:24.732595 containerd[1837]: time="2024-07-02T00:23:24.732519991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference 
\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:23:24.734998 containerd[1837]: time="2024-07-02T00:23:24.734348119Z" level=info msg="CreateContainer within sandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:23:24.773689 containerd[1837]: time="2024-07-02T00:23:24.773630424Z" level=info msg="CreateContainer within sandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1\"" Jul 2 00:23:24.775117 containerd[1837]: time="2024-07-02T00:23:24.774189032Z" level=info msg="StartContainer for \"2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1\"" Jul 2 00:23:24.826920 containerd[1837]: time="2024-07-02T00:23:24.826871343Z" level=info msg="StartContainer for \"2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1\" returns successfully" Jul 2 00:23:24.985756 containerd[1837]: time="2024-07-02T00:23:24.985542685Z" level=info msg="shim disconnected" id=2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1 namespace=k8s.io Jul 2 00:23:24.985756 containerd[1837]: time="2024-07-02T00:23:24.985615786Z" level=warning msg="cleaning up after shim disconnected" id=2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1 namespace=k8s.io Jul 2 00:23:24.985756 containerd[1837]: time="2024-07-02T00:23:24.985629386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:25.166779 kubelet[3384]: I0702 00:23:25.166741 3384 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="91d0d97e-9ecc-461e-a0d2-0b4937478e67" path="/var/lib/kubelet/pods/91d0d97e-9ecc-461e-a0d2-0b4937478e67/volumes" Jul 2 00:23:25.296754 containerd[1837]: time="2024-07-02T00:23:25.292768612Z" level=info msg="StopPodSandbox for 
\"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\"" Jul 2 00:23:25.296754 containerd[1837]: time="2024-07-02T00:23:25.292819013Z" level=info msg="Container to stop \"2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:23:25.302574 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4-shm.mount: Deactivated successfully. Jul 2 00:23:25.313314 kubelet[3384]: I0702 00:23:25.310088 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-54f58ccdfc-nj4qn" podStartSLOduration=7.310041578 podCreationTimestamp="2024-07-02 00:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:23:25.309582071 +0000 UTC m=+28.234740680" watchObservedRunningTime="2024-07-02 00:23:25.310041578 +0000 UTC m=+28.235200287" Jul 2 00:23:25.359023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4-rootfs.mount: Deactivated successfully. 
Jul 2 00:23:25.365521 containerd[1837]: time="2024-07-02T00:23:25.365349929Z" level=info msg="shim disconnected" id=17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4 namespace=k8s.io Jul 2 00:23:25.365778 containerd[1837]: time="2024-07-02T00:23:25.365566032Z" level=warning msg="cleaning up after shim disconnected" id=17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4 namespace=k8s.io Jul 2 00:23:25.365778 containerd[1837]: time="2024-07-02T00:23:25.365582432Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:25.379513 containerd[1837]: time="2024-07-02T00:23:25.379466546Z" level=info msg="TearDown network for sandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" successfully" Jul 2 00:23:25.379513 containerd[1837]: time="2024-07-02T00:23:25.379503647Z" level=info msg="StopPodSandbox for \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" returns successfully" Jul 2 00:23:25.510034 kubelet[3384]: I0702 00:23:25.509993 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz958\" (UniqueName: \"kubernetes.io/projected/f71e83a5-0d99-415e-a715-322fc70233b8-kube-api-access-cz958\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510408 kubelet[3384]: I0702 00:23:25.510056 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-run-calico\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510408 kubelet[3384]: I0702 00:23:25.510084 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-bin-dir\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: 
\"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510408 kubelet[3384]: I0702 00:23:25.510114 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-log-dir\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510408 kubelet[3384]: I0702 00:23:25.510141 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-policysync\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510408 kubelet[3384]: I0702 00:23:25.510168 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-lib-modules\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510408 kubelet[3384]: I0702 00:23:25.510205 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-flexvol-driver-host\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510806 kubelet[3384]: I0702 00:23:25.510233 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-xtables-lock\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510806 kubelet[3384]: I0702 00:23:25.510268 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-lib-calico\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510806 kubelet[3384]: I0702 00:23:25.510299 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71e83a5-0d99-415e-a715-322fc70233b8-tigera-ca-bundle\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510806 kubelet[3384]: I0702 00:23:25.510335 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f71e83a5-0d99-415e-a715-322fc70233b8-node-certs\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510806 kubelet[3384]: I0702 00:23:25.510368 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-net-dir\") pod \"f71e83a5-0d99-415e-a715-322fc70233b8\" (UID: \"f71e83a5-0d99-415e-a715-322fc70233b8\") " Jul 2 00:23:25.510806 kubelet[3384]: I0702 00:23:25.510438 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511145 kubelet[3384]: I0702 00:23:25.510490 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). 
InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511145 kubelet[3384]: I0702 00:23:25.510520 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511145 kubelet[3384]: I0702 00:23:25.510544 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511145 kubelet[3384]: I0702 00:23:25.510569 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-policysync" (OuterVolumeSpecName: "policysync") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511145 kubelet[3384]: I0702 00:23:25.510593 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511423 kubelet[3384]: I0702 00:23:25.510621 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511423 kubelet[3384]: I0702 00:23:25.510646 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511423 kubelet[3384]: I0702 00:23:25.510711 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:23:25.511423 kubelet[3384]: I0702 00:23:25.511245 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71e83a5-0d99-415e-a715-322fc70233b8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:23:25.519445 kubelet[3384]: I0702 00:23:25.519403 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71e83a5-0d99-415e-a715-322fc70233b8-node-certs" (OuterVolumeSpecName: "node-certs") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:23:25.519594 kubelet[3384]: I0702 00:23:25.519415 3384 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71e83a5-0d99-415e-a715-322fc70233b8-kube-api-access-cz958" (OuterVolumeSpecName: "kube-api-access-cz958") pod "f71e83a5-0d99-415e-a715-322fc70233b8" (UID: "f71e83a5-0d99-415e-a715-322fc70233b8"). InnerVolumeSpecName "kube-api-access-cz958". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:23:25.519916 systemd[1]: var-lib-kubelet-pods-f71e83a5\x2d0d99\x2d415e\x2da715\x2d322fc70233b8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 00:23:25.524072 systemd[1]: var-lib-kubelet-pods-f71e83a5\x2d0d99\x2d415e\x2da715\x2d322fc70233b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcz958.mount: Deactivated successfully. 
Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611344 3384 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-run-calico\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611395 3384 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-bin-dir\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611415 3384 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-log-dir\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611433 3384 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-policysync\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611449 3384 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-lib-modules\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611470 3384 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-flexvol-driver-host\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 kubelet[3384]: I0702 00:23:25.611486 3384 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-xtables-lock\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.611493 
kubelet[3384]: I0702 00:23:25.611504 3384 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-var-lib-calico\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.612099 kubelet[3384]: I0702 00:23:25.611521 3384 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71e83a5-0d99-415e-a715-322fc70233b8-tigera-ca-bundle\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.612099 kubelet[3384]: I0702 00:23:25.611536 3384 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f71e83a5-0d99-415e-a715-322fc70233b8-node-certs\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.612099 kubelet[3384]: I0702 00:23:25.611551 3384 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f71e83a5-0d99-415e-a715-322fc70233b8-cni-net-dir\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:25.612099 kubelet[3384]: I0702 00:23:25.611568 3384 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cz958\" (UniqueName: \"kubernetes.io/projected/f71e83a5-0d99-415e-a715-322fc70233b8-kube-api-access-cz958\") on node \"ci-3975.1.1-a-106c6d4ee2\" DevicePath \"\"" Jul 2 00:23:26.164336 kubelet[3384]: E0702 00:23:26.164291 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:26.296710 kubelet[3384]: I0702 00:23:26.296115 3384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:23:26.296710 kubelet[3384]: I0702 00:23:26.296183 3384 
scope.go:117] "RemoveContainer" containerID="2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1" Jul 2 00:23:26.302460 containerd[1837]: time="2024-07-02T00:23:26.301685237Z" level=info msg="RemoveContainer for \"2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1\"" Jul 2 00:23:26.309788 containerd[1837]: time="2024-07-02T00:23:26.309671160Z" level=info msg="RemoveContainer for \"2596fe4a9fc06ae46c4aae58e0d12786a3713a3fcba7a5432667a4fa294112f1\" returns successfully" Jul 2 00:23:26.337854 kubelet[3384]: I0702 00:23:26.337058 3384 topology_manager.go:215] "Topology Admit Handler" podUID="0e35611a-8091-4421-bc8a-3569225c4cf6" podNamespace="calico-system" podName="calico-node-npkjr" Jul 2 00:23:26.340238 kubelet[3384]: E0702 00:23:26.338367 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f71e83a5-0d99-415e-a715-322fc70233b8" containerName="flexvol-driver" Jul 2 00:23:26.340238 kubelet[3384]: I0702 00:23:26.338417 3384 memory_manager.go:346] "RemoveStaleState removing state" podUID="f71e83a5-0d99-415e-a715-322fc70233b8" containerName="flexvol-driver" Jul 2 00:23:26.417221 kubelet[3384]: I0702 00:23:26.417039 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-var-run-calico\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.417221 kubelet[3384]: I0702 00:23:26.417183 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-cni-bin-dir\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.417451 kubelet[3384]: I0702 00:23:26.417246 3384 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e35611a-8091-4421-bc8a-3569225c4cf6-node-certs\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.417451 kubelet[3384]: I0702 00:23:26.417297 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-policysync\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.417451 kubelet[3384]: I0702 00:23:26.417333 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6dd9\" (UniqueName: \"kubernetes.io/projected/0e35611a-8091-4421-bc8a-3569225c4cf6-kube-api-access-r6dd9\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.417451 kubelet[3384]: I0702 00:23:26.417368 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-flexvol-driver-host\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.417451 kubelet[3384]: I0702 00:23:26.417399 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-xtables-lock\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.419087 kubelet[3384]: I0702 00:23:26.417467 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-cni-net-dir\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.419087 kubelet[3384]: I0702 00:23:26.417501 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-cni-log-dir\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.419087 kubelet[3384]: I0702 00:23:26.417542 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-lib-modules\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.419087 kubelet[3384]: I0702 00:23:26.417578 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e35611a-8091-4421-bc8a-3569225c4cf6-tigera-ca-bundle\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.419087 kubelet[3384]: I0702 00:23:26.417617 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e35611a-8091-4421-bc8a-3569225c4cf6-var-lib-calico\") pod \"calico-node-npkjr\" (UID: \"0e35611a-8091-4421-bc8a-3569225c4cf6\") " pod="calico-system/calico-node-npkjr" Jul 2 00:23:26.645007 containerd[1837]: time="2024-07-02T00:23:26.644955319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-npkjr,Uid:0e35611a-8091-4421-bc8a-3569225c4cf6,Namespace:calico-system,Attempt:0,}" Jul 2 
00:23:26.688136 containerd[1837]: time="2024-07-02T00:23:26.687758477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:26.688136 containerd[1837]: time="2024-07-02T00:23:26.687817078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:26.688136 containerd[1837]: time="2024-07-02T00:23:26.687843879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:26.688136 containerd[1837]: time="2024-07-02T00:23:26.687864179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:26.725654 containerd[1837]: time="2024-07-02T00:23:26.725609660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-npkjr,Uid:0e35611a-8091-4421-bc8a-3569225c4cf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\"" Jul 2 00:23:26.728260 containerd[1837]: time="2024-07-02T00:23:26.728218400Z" level=info msg="CreateContainer within sandbox \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:23:26.764512 containerd[1837]: time="2024-07-02T00:23:26.764408857Z" level=info msg="CreateContainer within sandbox \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab3baa9df0e7b44d34abb4a812bc7a21afc21cb889cb0c773f54baedfa8cebe7\"" Jul 2 00:23:26.765538 containerd[1837]: time="2024-07-02T00:23:26.765126368Z" level=info msg="StartContainer for \"ab3baa9df0e7b44d34abb4a812bc7a21afc21cb889cb0c773f54baedfa8cebe7\"" Jul 2 00:23:26.821783 containerd[1837]: 
time="2024-07-02T00:23:26.821738639Z" level=info msg="StartContainer for \"ab3baa9df0e7b44d34abb4a812bc7a21afc21cb889cb0c773f54baedfa8cebe7\" returns successfully" Jul 2 00:23:26.904804 containerd[1837]: time="2024-07-02T00:23:26.904738116Z" level=info msg="shim disconnected" id=ab3baa9df0e7b44d34abb4a812bc7a21afc21cb889cb0c773f54baedfa8cebe7 namespace=k8s.io Jul 2 00:23:26.904804 containerd[1837]: time="2024-07-02T00:23:26.904801517Z" level=warning msg="cleaning up after shim disconnected" id=ab3baa9df0e7b44d34abb4a812bc7a21afc21cb889cb0c773f54baedfa8cebe7 namespace=k8s.io Jul 2 00:23:26.904804 containerd[1837]: time="2024-07-02T00:23:26.904812217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:27.169139 kubelet[3384]: I0702 00:23:27.168834 3384 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f71e83a5-0d99-415e-a715-322fc70233b8" path="/var/lib/kubelet/pods/f71e83a5-0d99-415e-a715-322fc70233b8/volumes" Jul 2 00:23:27.307356 containerd[1837]: time="2024-07-02T00:23:27.307179409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:23:28.163972 kubelet[3384]: E0702 00:23:28.163922 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:30.163541 kubelet[3384]: E0702 00:23:30.163491 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:32.164034 kubelet[3384]: E0702 00:23:32.163981 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:34.164020 kubelet[3384]: E0702 00:23:34.163968 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:34.642116 kubelet[3384]: I0702 00:23:34.641363 3384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:23:36.163840 kubelet[3384]: E0702 00:23:36.163781 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:38.163750 kubelet[3384]: E0702 00:23:38.163694 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:40.163992 kubelet[3384]: E0702 00:23:40.163917 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:42.164259 kubelet[3384]: E0702 00:23:42.164203 3384 pod_workers.go:1300] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:44.164102 kubelet[3384]: E0702 00:23:44.164049 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:46.163913 kubelet[3384]: E0702 00:23:46.163859 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:48.163698 kubelet[3384]: E0702 00:23:48.163603 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:50.164224 kubelet[3384]: E0702 00:23:50.164175 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:52.164453 kubelet[3384]: E0702 00:23:52.164418 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:54.164153 kubelet[3384]: E0702 00:23:54.164121 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:54.343913 containerd[1837]: time="2024-07-02T00:23:54.343861889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:54.345930 containerd[1837]: time="2024-07-02T00:23:54.345872820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:23:54.349554 containerd[1837]: time="2024-07-02T00:23:54.349503175Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:54.354583 containerd[1837]: time="2024-07-02T00:23:54.354534951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:54.355776 containerd[1837]: time="2024-07-02T00:23:54.355187561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 27.04787375s" Jul 2 00:23:54.355776 containerd[1837]: 
time="2024-07-02T00:23:54.355226962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:23:54.357703 containerd[1837]: time="2024-07-02T00:23:54.357544697Z" level=info msg="CreateContainer within sandbox \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:23:54.409832 containerd[1837]: time="2024-07-02T00:23:54.409788092Z" level=info msg="CreateContainer within sandbox \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41\"" Jul 2 00:23:54.410502 containerd[1837]: time="2024-07-02T00:23:54.410318800Z" level=info msg="StartContainer for \"dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41\"" Jul 2 00:23:54.444364 systemd[1]: run-containerd-runc-k8s.io-dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41-runc.4I8oTe.mount: Deactivated successfully. 
Jul 2 00:23:54.479687 containerd[1837]: time="2024-07-02T00:23:54.479615953Z" level=info msg="StartContainer for \"dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41\" returns successfully" Jul 2 00:23:55.788578 containerd[1837]: time="2024-07-02T00:23:55.788515757Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:55.803809 kubelet[3384]: I0702 00:23:55.803132 3384 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:23:55.825325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41-rootfs.mount: Deactivated successfully. Jul 2 00:23:55.836827 kubelet[3384]: I0702 00:23:55.836643 3384 topology_manager.go:215] "Topology Admit Handler" podUID="cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710" podNamespace="kube-system" podName="coredns-5dd5756b68-c8j52" Jul 2 00:23:55.856266 kubelet[3384]: I0702 00:23:55.850985 3384 topology_manager.go:215] "Topology Admit Handler" podUID="5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc" podNamespace="kube-system" podName="coredns-5dd5756b68-dxcvb" Jul 2 00:23:55.856266 kubelet[3384]: I0702 00:23:55.851280 3384 topology_manager.go:215] "Topology Admit Handler" podUID="a6d90bf0-0b28-471c-bedf-497b4879ee20" podNamespace="calico-system" podName="calico-kube-controllers-684dd7f97c-xvvr6" Jul 2 00:23:55.946170 kubelet[3384]: I0702 00:23:55.946108 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d72b\" (UniqueName: \"kubernetes.io/projected/a6d90bf0-0b28-471c-bedf-497b4879ee20-kube-api-access-8d72b\") pod \"calico-kube-controllers-684dd7f97c-xvvr6\" (UID: \"a6d90bf0-0b28-471c-bedf-497b4879ee20\") " 
pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" Jul 2 00:23:55.946170 kubelet[3384]: I0702 00:23:55.946182 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710-config-volume\") pod \"coredns-5dd5756b68-c8j52\" (UID: \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\") " pod="kube-system/coredns-5dd5756b68-c8j52" Jul 2 00:23:55.946439 kubelet[3384]: I0702 00:23:55.946228 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc-config-volume\") pod \"coredns-5dd5756b68-dxcvb\" (UID: \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\") " pod="kube-system/coredns-5dd5756b68-dxcvb" Jul 2 00:23:55.946439 kubelet[3384]: I0702 00:23:55.946259 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj85j\" (UniqueName: \"kubernetes.io/projected/5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc-kube-api-access-pj85j\") pod \"coredns-5dd5756b68-dxcvb\" (UID: \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\") " pod="kube-system/coredns-5dd5756b68-dxcvb" Jul 2 00:23:55.946439 kubelet[3384]: I0702 00:23:55.946295 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6d90bf0-0b28-471c-bedf-497b4879ee20-tigera-ca-bundle\") pod \"calico-kube-controllers-684dd7f97c-xvvr6\" (UID: \"a6d90bf0-0b28-471c-bedf-497b4879ee20\") " pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" Jul 2 00:23:55.946439 kubelet[3384]: I0702 00:23:55.946328 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2pg\" (UniqueName: \"kubernetes.io/projected/cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710-kube-api-access-kl2pg\") pod 
\"coredns-5dd5756b68-c8j52\" (UID: \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\") " pod="kube-system/coredns-5dd5756b68-c8j52" Jul 2 00:23:56.965603 containerd[1837]: time="2024-07-02T00:23:56.965551755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-586k2,Uid:681633ee-4999-4e86-b7fb-b802b78615ed,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:57.042305 containerd[1837]: time="2024-07-02T00:23:57.042254622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c8j52,Uid:cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:57.063373 containerd[1837]: time="2024-07-02T00:23:57.063320742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dxcvb,Uid:5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:57.065997 containerd[1837]: time="2024-07-02T00:23:57.065914481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd7f97c-xvvr6,Uid:a6d90bf0-0b28-471c-bedf-497b4879ee20,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:57.187279 containerd[1837]: time="2024-07-02T00:23:57.187235026Z" level=info msg="StopPodSandbox for \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\"" Jul 2 00:23:57.187475 containerd[1837]: time="2024-07-02T00:23:57.187337728Z" level=info msg="TearDown network for sandbox \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" successfully" Jul 2 00:23:57.187475 containerd[1837]: time="2024-07-02T00:23:57.187353628Z" level=info msg="StopPodSandbox for \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" returns successfully" Jul 2 00:23:57.187793 containerd[1837]: time="2024-07-02T00:23:57.187768134Z" level=info msg="RemovePodSandbox for \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\"" Jul 2 00:23:57.187909 containerd[1837]: time="2024-07-02T00:23:57.187798035Z" level=info msg="Forcibly stopping sandbox 
\"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\"" Jul 2 00:23:57.187909 containerd[1837]: time="2024-07-02T00:23:57.187856236Z" level=info msg="TearDown network for sandbox \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" successfully" Jul 2 00:23:57.421714 containerd[1837]: time="2024-07-02T00:23:57.421655091Z" level=error msg="collecting metrics for dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41" error="cgroups: cgroup deleted: unknown" Jul 2 00:23:57.461037 containerd[1837]: time="2024-07-02T00:23:57.460958388Z" level=info msg="shim disconnected" id=dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41 namespace=k8s.io Jul 2 00:23:57.461037 containerd[1837]: time="2024-07-02T00:23:57.461029190Z" level=warning msg="cleaning up after shim disconnected" id=dee354d607e24bdcc0fc00ab44ad800c68046b5f4f83a1cf8873c14292e04f41 namespace=k8s.io Jul 2 00:23:57.461037 containerd[1837]: time="2024-07-02T00:23:57.461040190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:57.489771 containerd[1837]: time="2024-07-02T00:23:57.489628924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:23:57.489771 containerd[1837]: time="2024-07-02T00:23:57.489710226Z" level=info msg="RemovePodSandbox \"4ee5add260dca732f3132f1c842e3189b7eb98ebde4485285e902568e61b948a\" returns successfully" Jul 2 00:23:57.490300 containerd[1837]: time="2024-07-02T00:23:57.490270134Z" level=info msg="StopPodSandbox for \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\"" Jul 2 00:23:57.490404 containerd[1837]: time="2024-07-02T00:23:57.490355836Z" level=info msg="TearDown network for sandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" successfully" Jul 2 00:23:57.490404 containerd[1837]: time="2024-07-02T00:23:57.490370936Z" level=info msg="StopPodSandbox for \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" returns successfully" Jul 2 00:23:57.490695 containerd[1837]: time="2024-07-02T00:23:57.490653940Z" level=info msg="RemovePodSandbox for \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\"" Jul 2 00:23:57.490786 containerd[1837]: time="2024-07-02T00:23:57.490700241Z" level=info msg="Forcibly stopping sandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\"" Jul 2 00:23:57.490839 containerd[1837]: time="2024-07-02T00:23:57.490759342Z" level=info msg="TearDown network for sandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" successfully" Jul 2 00:23:57.532289 containerd[1837]: time="2024-07-02T00:23:57.529165926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:23:57.532289 containerd[1837]: time="2024-07-02T00:23:57.529252927Z" level=info msg="RemovePodSandbox \"17d0933b39accf036ce72cbb8d718c0b2d7798bd9f3cda8af56fe209f71a9be4\" returns successfully" Jul 2 00:23:57.675392 containerd[1837]: time="2024-07-02T00:23:57.674722739Z" level=error msg="Failed to destroy network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.676914 containerd[1837]: time="2024-07-02T00:23:57.676008359Z" level=error msg="encountered an error cleaning up failed sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.676914 containerd[1837]: time="2024-07-02T00:23:57.676084660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-586k2,Uid:681633ee-4999-4e86-b7fb-b802b78615ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.677126 kubelet[3384]: E0702 00:23:57.676421 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 2 00:23:57.677126 kubelet[3384]: E0702 00:23:57.676506 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:57.677126 kubelet[3384]: E0702 00:23:57.676536 3384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-586k2" Jul 2 00:23:57.678568 kubelet[3384]: E0702 00:23:57.676601 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-586k2_calico-system(681633ee-4999-4e86-b7fb-b802b78615ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-586k2_calico-system(681633ee-4999-4e86-b7fb-b802b78615ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:57.703161 containerd[1837]: time="2024-07-02T00:23:57.703113971Z" level=error msg="Failed to destroy network for sandbox 
\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.703955 containerd[1837]: time="2024-07-02T00:23:57.703437376Z" level=error msg="Failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.703955 containerd[1837]: time="2024-07-02T00:23:57.703908583Z" level=error msg="encountered an error cleaning up failed sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.704106 containerd[1837]: time="2024-07-02T00:23:57.703966584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c8j52,Uid:cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.704321 containerd[1837]: time="2024-07-02T00:23:57.704286089Z" level=error msg="encountered an error cleaning up failed sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.704402 containerd[1837]: time="2024-07-02T00:23:57.704338489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd7f97c-xvvr6,Uid:a6d90bf0-0b28-471c-bedf-497b4879ee20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.705933 kubelet[3384]: E0702 00:23:57.704613 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.705933 kubelet[3384]: E0702 00:23:57.704689 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.705933 kubelet[3384]: E0702 00:23:57.704726 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-c8j52" Jul 2 00:23:57.705933 kubelet[3384]: E0702 00:23:57.704753 3384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c8j52" Jul 2 00:23:57.706172 kubelet[3384]: E0702 00:23:57.704821 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-c8j52_kube-system(cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-c8j52_kube-system(cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c8j52" podUID="cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710" Jul 2 00:23:57.707315 kubelet[3384]: E0702 00:23:57.706318 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" Jul 2 00:23:57.707315 kubelet[3384]: E0702 00:23:57.706362 3384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" Jul 2 00:23:57.707315 kubelet[3384]: E0702 00:23:57.706429 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684dd7f97c-xvvr6_calico-system(a6d90bf0-0b28-471c-bedf-497b4879ee20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684dd7f97c-xvvr6_calico-system(a6d90bf0-0b28-471c-bedf-497b4879ee20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" podUID="a6d90bf0-0b28-471c-bedf-497b4879ee20" Jul 2 00:23:57.709342 containerd[1837]: time="2024-07-02T00:23:57.709054961Z" level=error msg="Failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.709545 containerd[1837]: time="2024-07-02T00:23:57.709481768Z" level=error msg="encountered an error cleaning up failed sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 2 00:23:57.709756 containerd[1837]: time="2024-07-02T00:23:57.709686271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dxcvb,Uid:5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.710077 kubelet[3384]: E0702 00:23:57.710017 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:57.710224 kubelet[3384]: E0702 00:23:57.710059 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-dxcvb" Jul 2 00:23:57.710224 kubelet[3384]: E0702 00:23:57.710192 3384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-dxcvb" Jul 2 00:23:57.710349 kubelet[3384]: E0702 00:23:57.710263 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-dxcvb_kube-system(5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-dxcvb_kube-system(5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dxcvb" podUID="5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc" Jul 2 00:23:57.966374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7-shm.mount: Deactivated successfully. Jul 2 00:23:57.966551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5-shm.mount: Deactivated successfully. 
Jul 2 00:23:58.369544 kubelet[3384]: I0702 00:23:58.369365 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:23:58.371005 containerd[1837]: time="2024-07-02T00:23:58.370777023Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:23:58.372242 containerd[1837]: time="2024-07-02T00:23:58.372106144Z" level=info msg="Ensure that sandbox a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7 in task-service has been cleanup successfully" Jul 2 00:23:58.372294 kubelet[3384]: I0702 00:23:58.371539 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:23:58.372947 containerd[1837]: time="2024-07-02T00:23:58.372508550Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:23:58.372947 containerd[1837]: time="2024-07-02T00:23:58.372809954Z" level=info msg="Ensure that sandbox 32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5 in task-service has been cleanup successfully" Jul 2 00:23:58.392819 containerd[1837]: time="2024-07-02T00:23:58.392769558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:23:58.394925 kubelet[3384]: I0702 00:23:58.394304 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:23:58.396082 containerd[1837]: time="2024-07-02T00:23:58.396056208Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:23:58.397110 containerd[1837]: time="2024-07-02T00:23:58.397085323Z" level=info msg="Ensure that sandbox fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f in task-service has been cleanup 
successfully" Jul 2 00:23:58.400683 kubelet[3384]: I0702 00:23:58.400608 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:23:58.402995 containerd[1837]: time="2024-07-02T00:23:58.402517306Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:23:58.403943 containerd[1837]: time="2024-07-02T00:23:58.403916027Z" level=info msg="Ensure that sandbox 245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b in task-service has been cleanup successfully" Jul 2 00:23:58.456622 containerd[1837]: time="2024-07-02T00:23:58.456565628Z" level=error msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" failed" error="failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.456945 kubelet[3384]: E0702 00:23:58.456868 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:23:58.457072 kubelet[3384]: E0702 00:23:58.456999 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b"} Jul 2 00:23:58.457072 kubelet[3384]: E0702 00:23:58.457055 3384 kuberuntime_manager.go:1080] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.457224 kubelet[3384]: E0702 00:23:58.457107 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dxcvb" podUID="5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc" Jul 2 00:23:58.471922 containerd[1837]: time="2024-07-02T00:23:58.471852260Z" level=error msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" failed" error="failed to destroy network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.472562 kubelet[3384]: E0702 00:23:58.472528 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:23:58.472776 kubelet[3384]: E0702 00:23:58.472579 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5"} Jul 2 00:23:58.472776 kubelet[3384]: E0702 00:23:58.472624 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"681633ee-4999-4e86-b7fb-b802b78615ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.472776 kubelet[3384]: E0702 00:23:58.472676 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"681633ee-4999-4e86-b7fb-b802b78615ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:23:58.472995 containerd[1837]: time="2024-07-02T00:23:58.472794075Z" level=error msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" failed" error="failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.473239 kubelet[3384]: E0702 00:23:58.473062 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:23:58.473239 kubelet[3384]: E0702 00:23:58.473097 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7"} Jul 2 00:23:58.473239 kubelet[3384]: E0702 00:23:58.473157 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.473239 kubelet[3384]: E0702 00:23:58.473207 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c8j52" 
podUID="cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710" Jul 2 00:23:58.480825 containerd[1837]: time="2024-07-02T00:23:58.480785696Z" level=error msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" failed" error="failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:23:58.481053 kubelet[3384]: E0702 00:23:58.481007 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:23:58.481053 kubelet[3384]: E0702 00:23:58.481041 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f"} Jul 2 00:23:58.481169 kubelet[3384]: E0702 00:23:58.481082 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:23:58.481169 kubelet[3384]: E0702 00:23:58.481117 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" podUID="a6d90bf0-0b28-471c-bedf-497b4879ee20" Jul 2 00:24:11.166950 containerd[1837]: time="2024-07-02T00:24:11.166462877Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:24:11.167703 containerd[1837]: time="2024-07-02T00:24:11.167035586Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:11.209986 containerd[1837]: time="2024-07-02T00:24:11.209537151Z" level=error msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" failed" error="failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:11.210245 kubelet[3384]: E0702 00:24:11.209831 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:11.210245 kubelet[3384]: E0702 00:24:11.209882 3384 
kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f"} Jul 2 00:24:11.210245 kubelet[3384]: E0702 00:24:11.209929 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:11.210933 kubelet[3384]: E0702 00:24:11.210862 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" podUID="a6d90bf0-0b28-471c-bedf-497b4879ee20" Jul 2 00:24:11.211621 containerd[1837]: time="2024-07-02T00:24:11.211570483Z" level=error msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" failed" error="failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:11.211807 kubelet[3384]: E0702 00:24:11.211782 3384 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:11.211896 kubelet[3384]: E0702 00:24:11.211824 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7"} Jul 2 00:24:11.211896 kubelet[3384]: E0702 00:24:11.211870 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:11.212012 kubelet[3384]: E0702 00:24:11.211908 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c8j52" podUID="cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710" Jul 2 00:24:12.165409 containerd[1837]: time="2024-07-02T00:24:12.164946705Z" level=info msg="StopPodSandbox for 
\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:12.191597 containerd[1837]: time="2024-07-02T00:24:12.191505121Z" level=error msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" failed" error="failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:12.192390 kubelet[3384]: E0702 00:24:12.192136 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:12.192390 kubelet[3384]: E0702 00:24:12.192196 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b"} Jul 2 00:24:12.192390 kubelet[3384]: E0702 00:24:12.192240 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:12.192390 kubelet[3384]: E0702 00:24:12.192279 3384 pod_workers.go:1300] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dxcvb" podUID="5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc" Jul 2 00:24:13.167289 containerd[1837]: time="2024-07-02T00:24:13.166744486Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:24:13.192977 containerd[1837]: time="2024-07-02T00:24:13.192925195Z" level=error msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" failed" error="failed to destroy network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:13.193464 kubelet[3384]: E0702 00:24:13.193162 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:13.193464 kubelet[3384]: E0702 00:24:13.193209 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5"} Jul 2 00:24:13.193464 kubelet[3384]: 
E0702 00:24:13.193260 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"681633ee-4999-4e86-b7fb-b802b78615ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:13.193464 kubelet[3384]: E0702 00:24:13.193299 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"681633ee-4999-4e86-b7fb-b802b78615ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:24:24.164536 containerd[1837]: time="2024-07-02T00:24:24.164465393Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:24.207914 containerd[1837]: time="2024-07-02T00:24:24.207844960Z" level=error msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" failed" error="failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:24.208471 kubelet[3384]: E0702 00:24:24.208238 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:24.208471 kubelet[3384]: E0702 00:24:24.208307 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b"} Jul 2 00:24:24.208471 kubelet[3384]: E0702 00:24:24.208359 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:24.208471 kubelet[3384]: E0702 00:24:24.208396 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dxcvb" podUID="5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc" Jul 2 00:24:25.166526 containerd[1837]: time="2024-07-02T00:24:25.165470982Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 
00:24:25.192278 containerd[1837]: time="2024-07-02T00:24:25.192226294Z" level=error msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" failed" error="failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:25.192497 kubelet[3384]: E0702 00:24:25.192481 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:25.192589 kubelet[3384]: E0702 00:24:25.192529 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f"} Jul 2 00:24:25.192589 kubelet[3384]: E0702 00:24:25.192578 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:25.192742 kubelet[3384]: E0702 00:24:25.192616 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" podUID="a6d90bf0-0b28-471c-bedf-497b4879ee20" Jul 2 00:24:26.167097 containerd[1837]: time="2024-07-02T00:24:26.165282753Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:26.193179 containerd[1837]: time="2024-07-02T00:24:26.193128981Z" level=error msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" failed" error="failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:26.193423 kubelet[3384]: E0702 00:24:26.193396 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:26.193835 kubelet[3384]: E0702 00:24:26.193446 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7"} Jul 2 00:24:26.193835 kubelet[3384]: E0702 00:24:26.193492 3384 kuberuntime_manager.go:1080] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:26.193835 kubelet[3384]: E0702 00:24:26.193532 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c8j52" podUID="cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710" Jul 2 00:24:28.165343 containerd[1837]: time="2024-07-02T00:24:28.164904082Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:24:28.194372 containerd[1837]: time="2024-07-02T00:24:28.194313449Z" level=error msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" failed" error="failed to destroy network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:28.194631 kubelet[3384]: E0702 00:24:28.194602 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:28.195049 kubelet[3384]: E0702 00:24:28.194651 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5"} Jul 2 00:24:28.195049 kubelet[3384]: E0702 00:24:28.194708 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"681633ee-4999-4e86-b7fb-b802b78615ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:28.195049 kubelet[3384]: E0702 00:24:28.194750 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"681633ee-4999-4e86-b7fb-b802b78615ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-586k2" podUID="681633ee-4999-4e86-b7fb-b802b78615ed" Jul 2 00:24:38.164629 containerd[1837]: time="2024-07-02T00:24:38.164521624Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:38.166638 containerd[1837]: 
time="2024-07-02T00:24:38.165965447Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:38.250653 containerd[1837]: time="2024-07-02T00:24:38.250474271Z" level=error msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" failed" error="failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.251783 kubelet[3384]: E0702 00:24:38.251514 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:38.251783 kubelet[3384]: E0702 00:24:38.251575 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b"} Jul 2 00:24:38.251783 kubelet[3384]: E0702 00:24:38.251627 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:38.251783 kubelet[3384]: E0702 
00:24:38.251697 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dxcvb" podUID="5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc" Jul 2 00:24:38.253819 containerd[1837]: time="2024-07-02T00:24:38.253776223Z" level=error msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" failed" error="failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.254108 kubelet[3384]: E0702 00:24:38.254089 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:38.254404 kubelet[3384]: E0702 00:24:38.254235 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7"} Jul 2 00:24:38.254404 kubelet[3384]: E0702 00:24:38.254287 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:38.254404 kubelet[3384]: E0702 00:24:38.254326 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c8j52" podUID="cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710" Jul 2 00:24:38.956110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618729837.mount: Deactivated successfully. 
Jul 2 00:24:39.013307 containerd[1837]: time="2024-07-02T00:24:39.013251825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:39.015353 containerd[1837]: time="2024-07-02T00:24:39.015286457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:24:39.020787 containerd[1837]: time="2024-07-02T00:24:39.020730942Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:39.026606 containerd[1837]: time="2024-07-02T00:24:39.026549533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:39.027294 containerd[1837]: time="2024-07-02T00:24:39.027158543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 40.634142982s" Jul 2 00:24:39.027294 containerd[1837]: time="2024-07-02T00:24:39.027197743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:24:39.043911 containerd[1837]: time="2024-07-02T00:24:39.042595185Z" level=info msg="CreateContainer within sandbox \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:24:39.099515 containerd[1837]: time="2024-07-02T00:24:39.099467076Z" level=info msg="CreateContainer 
within sandbox \"921c724a062a420c2b2696601ce662e417bcfe45ac016a94a52c0d3abf9df9e4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd88a681e955a3c07adf4d09c8219bb4020e29942ec58b9662193feaacb0e025\"" Jul 2 00:24:39.101012 containerd[1837]: time="2024-07-02T00:24:39.100037285Z" level=info msg="StartContainer for \"bd88a681e955a3c07adf4d09c8219bb4020e29942ec58b9662193feaacb0e025\"" Jul 2 00:24:39.155987 containerd[1837]: time="2024-07-02T00:24:39.155938661Z" level=info msg="StartContainer for \"bd88a681e955a3c07adf4d09c8219bb4020e29942ec58b9662193feaacb0e025\" returns successfully" Jul 2 00:24:39.168605 containerd[1837]: time="2024-07-02T00:24:39.167753546Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:24:39.208153 containerd[1837]: time="2024-07-02T00:24:39.207499469Z" level=error msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" failed" error="failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:39.208594 kubelet[3384]: E0702 00:24:39.208565 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:39.208764 kubelet[3384]: E0702 00:24:39.208625 3384 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f"} Jul 2 00:24:39.208764 kubelet[3384]: E0702 00:24:39.208700 3384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:39.208764 kubelet[3384]: E0702 00:24:39.208745 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6d90bf0-0b28-471c-bedf-497b4879ee20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" podUID="a6d90bf0-0b28-471c-bedf-497b4879ee20" Jul 2 00:24:39.580844 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:24:39.581006 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:24:40.166583 containerd[1837]: time="2024-07-02T00:24:40.165260778Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:24:40.211071 kubelet[3384]: I0702 00:24:40.210652 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-npkjr" podStartSLOduration=2.489592538 podCreationTimestamp="2024-07-02 00:23:26 +0000 UTC" firstStartedPulling="2024-07-02 00:23:27.3066065 +0000 UTC m=+30.231765109" lastFinishedPulling="2024-07-02 00:24:39.02760545 +0000 UTC m=+101.952764059" observedRunningTime="2024-07-02 00:24:39.504856529 +0000 UTC m=+102.430015238" watchObservedRunningTime="2024-07-02 00:24:40.210591488 +0000 UTC m=+103.135750097" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.209 [INFO][4966] k8s.go 608: Cleaning up netns ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.210 [INFO][4966] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" iface="eth0" netns="/var/run/netns/cni-8c1ff557-1ed3-34ea-a81d-aba92523586a" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.210 [INFO][4966] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" iface="eth0" netns="/var/run/netns/cni-8c1ff557-1ed3-34ea-a81d-aba92523586a" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.210 [INFO][4966] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" iface="eth0" netns="/var/run/netns/cni-8c1ff557-1ed3-34ea-a81d-aba92523586a" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.210 [INFO][4966] k8s.go 615: Releasing IP address(es) ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.210 [INFO][4966] utils.go 188: Calico CNI releasing IP address ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.232 [INFO][4972] ipam_plugin.go 411: Releasing address using handleID ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.233 [INFO][4972] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.233 [INFO][4972] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.237 [WARNING][4972] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.237 [INFO][4972] ipam_plugin.go 439: Releasing address using workloadID ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.239 [INFO][4972] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:40.241536 containerd[1837]: 2024-07-02 00:24:40.240 [INFO][4966] k8s.go 621: Teardown processing complete. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:40.245315 containerd[1837]: time="2024-07-02T00:24:40.241698676Z" level=info msg="TearDown network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" successfully" Jul 2 00:24:40.245315 containerd[1837]: time="2024-07-02T00:24:40.241733176Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" returns successfully" Jul 2 00:24:40.245315 containerd[1837]: time="2024-07-02T00:24:40.242718592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-586k2,Uid:681633ee-4999-4e86-b7fb-b802b78615ed,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:40.247351 systemd[1]: run-netns-cni\x2d8c1ff557\x2d1ed3\x2d34ea\x2da81d\x2daba92523586a.mount: Deactivated successfully. 
Jul 2 00:24:40.384540 systemd-networkd[1407]: cali1b39a402ba8: Link UP Jul 2 00:24:40.385603 systemd-networkd[1407]: cali1b39a402ba8: Gained carrier Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.312 [INFO][4982] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.321 [INFO][4982] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0 csi-node-driver- calico-system 681633ee-4999-4e86-b7fb-b802b78615ed 920 0 2024-07-02 00:23:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-a-106c6d4ee2 csi-node-driver-586k2 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali1b39a402ba8 [] []}} ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.321 [INFO][4982] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.348 [INFO][4989] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" HandleID="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.402224 
containerd[1837]: 2024-07-02 00:24:40.356 [INFO][4989] ipam_plugin.go 264: Auto assigning IP ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" HandleID="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-106c6d4ee2", "pod":"csi-node-driver-586k2", "timestamp":"2024-07-02 00:24:40.348718253 +0000 UTC"}, Hostname:"ci-3975.1.1-a-106c6d4ee2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.356 [INFO][4989] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.356 [INFO][4989] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.356 [INFO][4989] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-106c6d4ee2' Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.357 [INFO][4989] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.360 [INFO][4989] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.363 [INFO][4989] ipam.go 489: Trying affinity for 192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.365 [INFO][4989] ipam.go 155: Attempting to load block cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.366 [INFO][4989] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.367 [INFO][4989] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.128/26 handle="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.368 [INFO][4989] ipam.go 1685: Creating new handle: k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97 Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.370 [INFO][4989] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.128/26 handle="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.374 [INFO][4989] ipam.go 1216: Successfully claimed IPs: [192.168.14.129/26] 
block=192.168.14.128/26 handle="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.374 [INFO][4989] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.129/26] handle="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.374 [INFO][4989] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:40.402224 containerd[1837]: 2024-07-02 00:24:40.374 [INFO][4989] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.129/26] IPv6=[] ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" HandleID="k8s-pod-network.d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.404398 containerd[1837]: 2024-07-02 00:24:40.376 [INFO][4982] k8s.go 386: Populated endpoint ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"681633ee-4999-4e86-b7fb-b802b78615ed", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"", Pod:"csi-node-driver-586k2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b39a402ba8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.404398 containerd[1837]: 2024-07-02 00:24:40.376 [INFO][4982] k8s.go 387: Calico CNI using IPs: [192.168.14.129/32] ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.404398 containerd[1837]: 2024-07-02 00:24:40.376 [INFO][4982] dataplane_linux.go 68: Setting the host side veth name to cali1b39a402ba8 ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.404398 containerd[1837]: 2024-07-02 00:24:40.384 [INFO][4982] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.404398 containerd[1837]: 2024-07-02 00:24:40.384 [INFO][4982] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"681633ee-4999-4e86-b7fb-b802b78615ed", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97", Pod:"csi-node-driver-586k2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b39a402ba8", MAC:"fe:b8:0f:83:9d:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.404398 containerd[1837]: 2024-07-02 00:24:40.398 [INFO][4982] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97" Namespace="calico-system" 
Pod="csi-node-driver-586k2" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:40.429401 containerd[1837]: time="2024-07-02T00:24:40.428449302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:40.429401 containerd[1837]: time="2024-07-02T00:24:40.429212414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.429401 containerd[1837]: time="2024-07-02T00:24:40.429261515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:40.429401 containerd[1837]: time="2024-07-02T00:24:40.429283115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.469883 containerd[1837]: time="2024-07-02T00:24:40.469845851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-586k2,Uid:681633ee-4999-4e86-b7fb-b802b78615ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97\"" Jul 2 00:24:40.471398 containerd[1837]: time="2024-07-02T00:24:40.471358375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:24:41.333195 systemd-networkd[1407]: vxlan.calico: Link UP Jul 2 00:24:41.333203 systemd-networkd[1407]: vxlan.calico: Gained carrier Jul 2 00:24:42.171894 systemd-networkd[1407]: cali1b39a402ba8: Gained IPv6LL Jul 2 00:24:42.620948 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Jul 2 00:24:42.709745 containerd[1837]: time="2024-07-02T00:24:42.709696728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.712425 containerd[1837]: time="2024-07-02T00:24:42.712364770Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:24:42.719403 containerd[1837]: time="2024-07-02T00:24:42.719339479Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.723861 containerd[1837]: time="2024-07-02T00:24:42.723803948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:42.725062 containerd[1837]: time="2024-07-02T00:24:42.724553860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.253153985s" Jul 2 00:24:42.725062 containerd[1837]: time="2024-07-02T00:24:42.724593961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:24:42.726375 containerd[1837]: time="2024-07-02T00:24:42.726327688Z" level=info msg="CreateContainer within sandbox \"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:24:42.770716 containerd[1837]: time="2024-07-02T00:24:42.770654680Z" level=info msg="CreateContainer within sandbox \"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b205d1deda575a5b014e8c7a2e20e4c68a27acbd22450a005fef37fcbff08686\"" Jul 2 00:24:42.771186 containerd[1837]: time="2024-07-02T00:24:42.771151987Z" 
level=info msg="StartContainer for \"b205d1deda575a5b014e8c7a2e20e4c68a27acbd22450a005fef37fcbff08686\"" Jul 2 00:24:42.833168 containerd[1837]: time="2024-07-02T00:24:42.833125555Z" level=info msg="StartContainer for \"b205d1deda575a5b014e8c7a2e20e4c68a27acbd22450a005fef37fcbff08686\" returns successfully" Jul 2 00:24:42.834228 containerd[1837]: time="2024-07-02T00:24:42.834205671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:24:44.768793 containerd[1837]: time="2024-07-02T00:24:44.768738165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:44.771077 containerd[1837]: time="2024-07-02T00:24:44.771021300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:24:44.776293 containerd[1837]: time="2024-07-02T00:24:44.776235582Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:44.780859 containerd[1837]: time="2024-07-02T00:24:44.780798453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:44.781598 containerd[1837]: time="2024-07-02T00:24:44.781443863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.947070489s" Jul 2 00:24:44.781598 containerd[1837]: 
time="2024-07-02T00:24:44.781485264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:24:44.783434 containerd[1837]: time="2024-07-02T00:24:44.783404594Z" level=info msg="CreateContainer within sandbox \"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:24:44.822919 containerd[1837]: time="2024-07-02T00:24:44.822872310Z" level=info msg="CreateContainer within sandbox \"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b6c9d8b90950844a8ed32fa774f0424546ca9cb2ecbe36fdb912dec83aff4bcb\"" Jul 2 00:24:44.823402 containerd[1837]: time="2024-07-02T00:24:44.823370317Z" level=info msg="StartContainer for \"b6c9d8b90950844a8ed32fa774f0424546ca9cb2ecbe36fdb912dec83aff4bcb\"" Jul 2 00:24:44.885316 containerd[1837]: time="2024-07-02T00:24:44.885259083Z" level=info msg="StartContainer for \"b6c9d8b90950844a8ed32fa774f0424546ca9cb2ecbe36fdb912dec83aff4bcb\" returns successfully" Jul 2 00:24:45.314646 kubelet[3384]: I0702 00:24:45.314592 3384 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:24:45.314646 kubelet[3384]: I0702 00:24:45.314635 3384 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:24:52.165213 containerd[1837]: time="2024-07-02T00:24:52.165154725Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:52.166246 containerd[1837]: time="2024-07-02T00:24:52.165164425Z" level=info msg="StopPodSandbox for 
\"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:24:52.226682 kubelet[3384]: I0702 00:24:52.224104 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-586k2" podStartSLOduration=89.913180745 podCreationTimestamp="2024-07-02 00:23:18 +0000 UTC" firstStartedPulling="2024-07-02 00:24:40.47105737 +0000 UTC m=+103.396215979" lastFinishedPulling="2024-07-02 00:24:44.78192287 +0000 UTC m=+107.707081479" observedRunningTime="2024-07-02 00:24:45.511434056 +0000 UTC m=+108.436592665" watchObservedRunningTime="2024-07-02 00:24:52.224046245 +0000 UTC m=+115.149204854" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.224 [INFO][5379] k8s.go 608: Cleaning up netns ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.225 [INFO][5379] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" iface="eth0" netns="/var/run/netns/cni-df354051-1f9e-67f3-ac70-27118a482802" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.225 [INFO][5379] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" iface="eth0" netns="/var/run/netns/cni-df354051-1f9e-67f3-ac70-27118a482802" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.225 [INFO][5379] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" iface="eth0" netns="/var/run/netns/cni-df354051-1f9e-67f3-ac70-27118a482802" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.226 [INFO][5379] k8s.go 615: Releasing IP address(es) ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.226 [INFO][5379] utils.go 188: Calico CNI releasing IP address ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.267 [INFO][5400] ipam_plugin.go 411: Releasing address using handleID ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.269 [INFO][5400] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.269 [INFO][5400] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.300 [WARNING][5400] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.300 [INFO][5400] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.306 [INFO][5400] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.319453 containerd[1837]: 2024-07-02 00:24:52.312 [INFO][5379] k8s.go 621: Teardown processing complete. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:52.329686 containerd[1837]: time="2024-07-02T00:24:52.323844903Z" level=info msg="TearDown network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" successfully" Jul 2 00:24:52.329686 containerd[1837]: time="2024-07-02T00:24:52.323900904Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" returns successfully" Jul 2 00:24:52.329686 containerd[1837]: time="2024-07-02T00:24:52.324946720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd7f97c-xvvr6,Uid:a6d90bf0-0b28-471c-bedf-497b4879ee20,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:52.327087 systemd[1]: run-netns-cni\x2ddf354051\x2d1f9e\x2d67f3\x2dac70\x2d27118a482802.mount: Deactivated successfully. 
Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.231 [INFO][5391] k8s.go 608: Cleaning up netns ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.231 [INFO][5391] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" iface="eth0" netns="/var/run/netns/cni-36a42b25-1f50-cbc6-cca7-42eb8e956e1b" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.232 [INFO][5391] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" iface="eth0" netns="/var/run/netns/cni-36a42b25-1f50-cbc6-cca7-42eb8e956e1b" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.232 [INFO][5391] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" iface="eth0" netns="/var/run/netns/cni-36a42b25-1f50-cbc6-cca7-42eb8e956e1b" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.232 [INFO][5391] k8s.go 615: Releasing IP address(es) ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.232 [INFO][5391] utils.go 188: Calico CNI releasing IP address ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.284 [INFO][5404] ipam_plugin.go 411: Releasing address using handleID ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.285 [INFO][5404] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.304 [INFO][5404] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.339 [WARNING][5404] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.339 [INFO][5404] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.349 [INFO][5404] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.357853 containerd[1837]: 2024-07-02 00:24:52.350 [INFO][5391] k8s.go 621: Teardown processing complete. 
ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:52.362700 containerd[1837]: time="2024-07-02T00:24:52.361461291Z" level=info msg="TearDown network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" successfully" Jul 2 00:24:52.362700 containerd[1837]: time="2024-07-02T00:24:52.361503791Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" returns successfully" Jul 2 00:24:52.364346 containerd[1837]: time="2024-07-02T00:24:52.364312635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c8j52,Uid:cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710,Namespace:kube-system,Attempt:1,}" Jul 2 00:24:52.369666 systemd[1]: run-netns-cni\x2d36a42b25\x2d1f50\x2dcbc6\x2dcca7\x2d42eb8e956e1b.mount: Deactivated successfully. Jul 2 00:24:52.628909 systemd-networkd[1407]: cali323308ebf36: Link UP Jul 2 00:24:52.632238 systemd-networkd[1407]: cali323308ebf36: Gained carrier Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.494 [INFO][5414] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0 calico-kube-controllers-684dd7f97c- calico-system a6d90bf0-0b28-471c-bedf-497b4879ee20 953 0 2024-07-02 00:23:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:684dd7f97c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-a-106c6d4ee2 calico-kube-controllers-684dd7f97c-xvvr6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali323308ebf36 [] []}} ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" 
WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.494 [INFO][5414] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.569 [INFO][5436] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" HandleID="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.582 [INFO][5436] ipam_plugin.go 264: Auto assigning IP ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" HandleID="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003183f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-106c6d4ee2", "pod":"calico-kube-controllers-684dd7f97c-xvvr6", "timestamp":"2024-07-02 00:24:52.569938547 +0000 UTC"}, Hostname:"ci-3975.1.1-a-106c6d4ee2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.582 [INFO][5436] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.582 [INFO][5436] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.582 [INFO][5436] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-106c6d4ee2' Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.584 [INFO][5436] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.590 [INFO][5436] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.603 [INFO][5436] ipam.go 489: Trying affinity for 192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.606 [INFO][5436] ipam.go 155: Attempting to load block cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.609 [INFO][5436] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.609 [INFO][5436] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.128/26 handle="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.611 [INFO][5436] ipam.go 1685: Creating new handle: k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.615 [INFO][5436] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.128/26 handle="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 
containerd[1837]: 2024-07-02 00:24:52.618 [INFO][5436] ipam.go 1216: Successfully claimed IPs: [192.168.14.130/26] block=192.168.14.128/26 handle="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.619 [INFO][5436] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.130/26] handle="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.619 [INFO][5436] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.663391 containerd[1837]: 2024-07-02 00:24:52.619 [INFO][5436] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.130/26] IPv6=[] ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" HandleID="k8s-pod-network.a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.666443 containerd[1837]: 2024-07-02 00:24:52.622 [INFO][5414] k8s.go 386: Populated endpoint ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0", GenerateName:"calico-kube-controllers-684dd7f97c-", Namespace:"calico-system", SelfLink:"", UID:"a6d90bf0-0b28-471c-bedf-497b4879ee20", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684dd7f97c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"", Pod:"calico-kube-controllers-684dd7f97c-xvvr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323308ebf36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.666443 containerd[1837]: 2024-07-02 00:24:52.623 [INFO][5414] k8s.go 387: Calico CNI using IPs: [192.168.14.130/32] ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.666443 containerd[1837]: 2024-07-02 00:24:52.623 [INFO][5414] dataplane_linux.go 68: Setting the host side veth name to cali323308ebf36 ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.666443 containerd[1837]: 2024-07-02 00:24:52.634 [INFO][5414] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" 
Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.666443 containerd[1837]: 2024-07-02 00:24:52.635 [INFO][5414] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0", GenerateName:"calico-kube-controllers-684dd7f97c-", Namespace:"calico-system", SelfLink:"", UID:"a6d90bf0-0b28-471c-bedf-497b4879ee20", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684dd7f97c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f", Pod:"calico-kube-controllers-684dd7f97c-xvvr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323308ebf36", MAC:"22:59:86:4b:bc:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.666443 containerd[1837]: 2024-07-02 00:24:52.658 [INFO][5414] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f" Namespace="calico-system" Pod="calico-kube-controllers-684dd7f97c-xvvr6" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:52.673870 systemd-networkd[1407]: calid9e5ac61c87: Link UP Jul 2 00:24:52.675175 systemd-networkd[1407]: calid9e5ac61c87: Gained carrier Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.536 [INFO][5423] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0 coredns-5dd5756b68- kube-system cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710 954 0 2024-07-02 00:23:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-106c6d4ee2 coredns-5dd5756b68-c8j52 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9e5ac61c87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.536 [INFO][5423] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 
00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.609 [INFO][5442] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" HandleID="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.621 [INFO][5442] ipam_plugin.go 264: Auto assigning IP ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" HandleID="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-106c6d4ee2", "pod":"coredns-5dd5756b68-c8j52", "timestamp":"2024-07-02 00:24:52.60918736 +0000 UTC"}, Hostname:"ci-3975.1.1-a-106c6d4ee2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.621 [INFO][5442] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.621 [INFO][5442] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.622 [INFO][5442] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-106c6d4ee2' Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.623 [INFO][5442] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.631 [INFO][5442] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.636 [INFO][5442] ipam.go 489: Trying affinity for 192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.640 [INFO][5442] ipam.go 155: Attempting to load block cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.648 [INFO][5442] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.648 [INFO][5442] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.128/26 handle="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.649 [INFO][5442] ipam.go 1685: Creating new handle: k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3 Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.658 [INFO][5442] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.128/26 handle="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.665 [INFO][5442] ipam.go 1216: Successfully claimed IPs: [192.168.14.131/26] 
block=192.168.14.128/26 handle="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.665 [INFO][5442] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.131/26] handle="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.666 [INFO][5442] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.697503 containerd[1837]: 2024-07-02 00:24:52.666 [INFO][5442] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.131/26] IPv6=[] ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" HandleID="k8s-pod-network.e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.699063 containerd[1837]: 2024-07-02 00:24:52.668 [INFO][5423] k8s.go 386: Populated endpoint ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"", Pod:"coredns-5dd5756b68-c8j52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9e5ac61c87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.699063 containerd[1837]: 2024-07-02 00:24:52.668 [INFO][5423] k8s.go 387: Calico CNI using IPs: [192.168.14.131/32] ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.699063 containerd[1837]: 2024-07-02 00:24:52.669 [INFO][5423] dataplane_linux.go 68: Setting the host side veth name to calid9e5ac61c87 ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.699063 containerd[1837]: 2024-07-02 00:24:52.676 [INFO][5423] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" 
Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.699063 containerd[1837]: 2024-07-02 00:24:52.676 [INFO][5423] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3", Pod:"coredns-5dd5756b68-c8j52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9e5ac61c87", MAC:"f6:0e:fb:91:f7:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.699063 containerd[1837]: 2024-07-02 00:24:52.690 [INFO][5423] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3" Namespace="kube-system" Pod="coredns-5dd5756b68-c8j52" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:52.732481 containerd[1837]: time="2024-07-02T00:24:52.730230451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:52.732481 containerd[1837]: time="2024-07-02T00:24:52.731812975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.732481 containerd[1837]: time="2024-07-02T00:24:52.731846276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:52.732481 containerd[1837]: time="2024-07-02T00:24:52.731862076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.796504 containerd[1837]: time="2024-07-02T00:24:52.796197281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:52.796504 containerd[1837]: time="2024-07-02T00:24:52.796265982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.796504 containerd[1837]: time="2024-07-02T00:24:52.796287682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:52.796504 containerd[1837]: time="2024-07-02T00:24:52.796300682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.867706 containerd[1837]: time="2024-07-02T00:24:52.867608396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684dd7f97c-xvvr6,Uid:a6d90bf0-0b28-471c-bedf-497b4879ee20,Namespace:calico-system,Attempt:1,} returns sandbox id \"a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f\"" Jul 2 00:24:52.872144 containerd[1837]: time="2024-07-02T00:24:52.871848762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:24:52.918216 containerd[1837]: time="2024-07-02T00:24:52.917864181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c8j52,Uid:cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710,Namespace:kube-system,Attempt:1,} returns sandbox id \"e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3\"" Jul 2 00:24:52.922437 containerd[1837]: time="2024-07-02T00:24:52.922399752Z" level=info msg="CreateContainer within sandbox \"e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:24:52.962818 containerd[1837]: time="2024-07-02T00:24:52.962776783Z" level=info msg="CreateContainer within sandbox \"e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4948dc81d8dbf7debb06338a154798793b1d1f3e5173a17507f3fb09b371dc9\"" Jul 2 00:24:52.963397 containerd[1837]: time="2024-07-02T00:24:52.963297391Z" level=info msg="StartContainer for 
\"b4948dc81d8dbf7debb06338a154798793b1d1f3e5173a17507f3fb09b371dc9\"" Jul 2 00:24:53.012454 containerd[1837]: time="2024-07-02T00:24:53.012301656Z" level=info msg="StartContainer for \"b4948dc81d8dbf7debb06338a154798793b1d1f3e5173a17507f3fb09b371dc9\" returns successfully" Jul 2 00:24:53.166139 containerd[1837]: time="2024-07-02T00:24:53.166043457Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.206 [INFO][5607] k8s.go 608: Cleaning up netns ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.206 [INFO][5607] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" iface="eth0" netns="/var/run/netns/cni-a846ce3b-1d84-f186-84f7-d339f788acc0" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.206 [INFO][5607] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" iface="eth0" netns="/var/run/netns/cni-a846ce3b-1d84-f186-84f7-d339f788acc0" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.206 [INFO][5607] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" iface="eth0" netns="/var/run/netns/cni-a846ce3b-1d84-f186-84f7-d339f788acc0" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.207 [INFO][5607] k8s.go 615: Releasing IP address(es) ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.207 [INFO][5607] utils.go 188: Calico CNI releasing IP address ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.226 [INFO][5613] ipam_plugin.go 411: Releasing address using handleID ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.226 [INFO][5613] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.226 [INFO][5613] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.232 [WARNING][5613] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.232 [INFO][5613] ipam_plugin.go 439: Releasing address using workloadID ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.233 [INFO][5613] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:53.235713 containerd[1837]: 2024-07-02 00:24:53.234 [INFO][5607] k8s.go 621: Teardown processing complete. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:53.235713 containerd[1837]: time="2024-07-02T00:24:53.235582944Z" level=info msg="TearDown network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" successfully" Jul 2 00:24:53.235713 containerd[1837]: time="2024-07-02T00:24:53.235622444Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" returns successfully" Jul 2 00:24:53.237045 containerd[1837]: time="2024-07-02T00:24:53.236419557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dxcvb,Uid:5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc,Namespace:kube-system,Attempt:1,}" Jul 2 00:24:53.340574 systemd[1]: run-netns-cni\x2da846ce3b\x2d1d84\x2df186\x2d84f7\x2dd339f788acc0.mount: Deactivated successfully. 
Jul 2 00:24:53.392811 systemd-networkd[1407]: cali5cdba5e3b60: Link UP Jul 2 00:24:53.393022 systemd-networkd[1407]: cali5cdba5e3b60: Gained carrier Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.313 [INFO][5620] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0 coredns-5dd5756b68- kube-system 5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc 968 0 2024-07-02 00:23:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-106c6d4ee2 coredns-5dd5756b68-dxcvb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cdba5e3b60 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.313 [INFO][5620] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.359 [INFO][5630] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" HandleID="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.366 [INFO][5630] ipam_plugin.go 264: Auto assigning IP 
ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" HandleID="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-106c6d4ee2", "pod":"coredns-5dd5756b68-dxcvb", "timestamp":"2024-07-02 00:24:53.358995671 +0000 UTC"}, Hostname:"ci-3975.1.1-a-106c6d4ee2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.366 [INFO][5630] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.366 [INFO][5630] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.366 [INFO][5630] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-106c6d4ee2' Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.368 [INFO][5630] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.372 [INFO][5630] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.376 [INFO][5630] ipam.go 489: Trying affinity for 192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.377 [INFO][5630] ipam.go 155: Attempting to load block cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.379 [INFO][5630] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.379 [INFO][5630] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.128/26 handle="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.381 [INFO][5630] ipam.go 1685: Creating new handle: k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2 Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.383 [INFO][5630] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.128/26 handle="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.388 [INFO][5630] ipam.go 1216: Successfully claimed IPs: [192.168.14.132/26] 
block=192.168.14.128/26 handle="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.388 [INFO][5630] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.132/26] handle="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.388 [INFO][5630] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:53.410785 containerd[1837]: 2024-07-02 00:24:53.388 [INFO][5630] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.132/26] IPv6=[] ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" HandleID="k8s-pod-network.4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.412716 containerd[1837]: 2024-07-02 00:24:53.389 [INFO][5620] k8s.go 386: Populated endpoint ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"", Pod:"coredns-5dd5756b68-dxcvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cdba5e3b60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:53.412716 containerd[1837]: 2024-07-02 00:24:53.390 [INFO][5620] k8s.go 387: Calico CNI using IPs: [192.168.14.132/32] ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.412716 containerd[1837]: 2024-07-02 00:24:53.390 [INFO][5620] dataplane_linux.go 68: Setting the host side veth name to cali5cdba5e3b60 ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.412716 containerd[1837]: 2024-07-02 00:24:53.392 [INFO][5620] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" 
Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.412716 containerd[1837]: 2024-07-02 00:24:53.394 [INFO][5620] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2", Pod:"coredns-5dd5756b68-dxcvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cdba5e3b60", MAC:"a6:ef:82:e7:ba:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:53.412716 containerd[1837]: 2024-07-02 00:24:53.407 [INFO][5620] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2" Namespace="kube-system" Pod="coredns-5dd5756b68-dxcvb" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:53.442564 containerd[1837]: time="2024-07-02T00:24:53.442418174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:53.442564 containerd[1837]: time="2024-07-02T00:24:53.442477675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:53.442564 containerd[1837]: time="2024-07-02T00:24:53.442513776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:53.442564 containerd[1837]: time="2024-07-02T00:24:53.442530276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:53.477373 systemd[1]: run-containerd-runc-k8s.io-4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2-runc.1WIt8w.mount: Deactivated successfully. 
Jul 2 00:24:53.543403 containerd[1837]: time="2024-07-02T00:24:53.543316550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dxcvb,Uid:5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2\"" Jul 2 00:24:53.547206 containerd[1837]: time="2024-07-02T00:24:53.546820105Z" level=info msg="CreateContainer within sandbox \"4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:24:53.567954 kubelet[3384]: I0702 00:24:53.566784 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c8j52" podStartSLOduration=104.566738416 podCreationTimestamp="2024-07-02 00:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:53.565363894 +0000 UTC m=+116.490522503" watchObservedRunningTime="2024-07-02 00:24:53.566738416 +0000 UTC m=+116.491897025" Jul 2 00:24:53.610916 containerd[1837]: time="2024-07-02T00:24:53.610868205Z" level=info msg="CreateContainer within sandbox \"4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19a8f37cc9e8cf8cdd16481cc202a584742ebbf6ba9dc2d14a5ec3b1f5cca063\"" Jul 2 00:24:53.612685 containerd[1837]: time="2024-07-02T00:24:53.612632833Z" level=info msg="StartContainer for \"19a8f37cc9e8cf8cdd16481cc202a584742ebbf6ba9dc2d14a5ec3b1f5cca063\"" Jul 2 00:24:53.700815 containerd[1837]: time="2024-07-02T00:24:53.700594906Z" level=info msg="StartContainer for \"19a8f37cc9e8cf8cdd16481cc202a584742ebbf6ba9dc2d14a5ec3b1f5cca063\" returns successfully" Jul 2 00:24:53.819885 systemd-networkd[1407]: cali323308ebf36: Gained IPv6LL Jul 2 00:24:54.460767 systemd-networkd[1407]: calid9e5ac61c87: Gained IPv6LL Jul 2 
00:24:54.560765 kubelet[3384]: I0702 00:24:54.559607 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dxcvb" podStartSLOduration=105.559564123 podCreationTimestamp="2024-07-02 00:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:54.559095315 +0000 UTC m=+117.484253924" watchObservedRunningTime="2024-07-02 00:24:54.559564123 +0000 UTC m=+117.484722732" Jul 2 00:24:55.291966 systemd-networkd[1407]: cali5cdba5e3b60: Gained IPv6LL Jul 2 00:24:55.353796 containerd[1837]: time="2024-07-02T00:24:55.353746127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.355892 containerd[1837]: time="2024-07-02T00:24:55.355827559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:24:55.360218 containerd[1837]: time="2024-07-02T00:24:55.360187527Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.365025 containerd[1837]: time="2024-07-02T00:24:55.364962402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.365729 containerd[1837]: time="2024-07-02T00:24:55.365691213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.49380115s" Jul 2 00:24:55.365835 containerd[1837]: time="2024-07-02T00:24:55.365734914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:24:55.381702 containerd[1837]: time="2024-07-02T00:24:55.381485060Z" level=info msg="CreateContainer within sandbox \"a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:24:55.420169 containerd[1837]: time="2024-07-02T00:24:55.420127163Z" level=info msg="CreateContainer within sandbox \"a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8368a7d2b939271ae58a2e1ff76a779dd29e14aced8cf673f7af030b07c0913c\"" Jul 2 00:24:55.422086 containerd[1837]: time="2024-07-02T00:24:55.420732373Z" level=info msg="StartContainer for \"8368a7d2b939271ae58a2e1ff76a779dd29e14aced8cf673f7af030b07c0913c\"" Jul 2 00:24:55.494909 containerd[1837]: time="2024-07-02T00:24:55.494858431Z" level=info msg="StartContainer for \"8368a7d2b939271ae58a2e1ff76a779dd29e14aced8cf673f7af030b07c0913c\" returns successfully" Jul 2 00:24:55.657344 kubelet[3384]: I0702 00:24:55.656965 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-684dd7f97c-xvvr6" podStartSLOduration=95.161181482 podCreationTimestamp="2024-07-02 00:23:18 +0000 UTC" firstStartedPulling="2024-07-02 00:24:52.870644844 +0000 UTC m=+115.795803553" lastFinishedPulling="2024-07-02 00:24:55.366175821 +0000 UTC m=+118.291334530" observedRunningTime="2024-07-02 00:24:55.575899296 +0000 UTC m=+118.501057905" watchObservedRunningTime="2024-07-02 00:24:55.656712459 +0000 UTC 
m=+118.581871168" Jul 2 00:24:57.107520 systemd[1]: run-containerd-runc-k8s.io-8368a7d2b939271ae58a2e1ff76a779dd29e14aced8cf673f7af030b07c0913c-runc.zprQC7.mount: Deactivated successfully. Jul 2 00:24:57.535265 containerd[1837]: time="2024-07-02T00:24:57.535142497Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.584 [WARNING][5825] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3", Pod:"coredns-5dd5756b68-c8j52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9e5ac61c87", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.584 [INFO][5825] k8s.go 608: Cleaning up netns ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.584 [INFO][5825] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" iface="eth0" netns="" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.584 [INFO][5825] k8s.go 615: Releasing IP address(es) ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.584 [INFO][5825] utils.go 188: Calico CNI releasing IP address ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.602 [INFO][5833] ipam_plugin.go 411: Releasing address using handleID ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.602 [INFO][5833] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.603 [INFO][5833] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.608 [WARNING][5833] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.608 [INFO][5833] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.609 [INFO][5833] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:57.611507 containerd[1837]: 2024-07-02 00:24:57.610 [INFO][5825] k8s.go 621: Teardown processing complete. 
ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.612498 containerd[1837]: time="2024-07-02T00:24:57.611559691Z" level=info msg="TearDown network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" successfully" Jul 2 00:24:57.612498 containerd[1837]: time="2024-07-02T00:24:57.611594092Z" level=info msg="StopPodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" returns successfully" Jul 2 00:24:57.612498 containerd[1837]: time="2024-07-02T00:24:57.612188501Z" level=info msg="RemovePodSandbox for \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:57.612498 containerd[1837]: time="2024-07-02T00:24:57.612227701Z" level=info msg="Forcibly stopping sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\"" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.642 [WARNING][5851] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"cc23b9a9-d7c3-49ee-a8ed-e6e1f5727710", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"e93a036f2fd4c35a8a7f2bf21c4de0feba0114d7e44f6c9e2f40677f3f54d9a3", Pod:"coredns-5dd5756b68-c8j52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9e5ac61c87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.642 [INFO][5851] k8s.go 608: 
Cleaning up netns ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.642 [INFO][5851] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" iface="eth0" netns="" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.642 [INFO][5851] k8s.go 615: Releasing IP address(es) ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.642 [INFO][5851] utils.go 188: Calico CNI releasing IP address ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.660 [INFO][5857] ipam_plugin.go 411: Releasing address using handleID ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.661 [INFO][5857] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.661 [INFO][5857] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.665 [WARNING][5857] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.665 [INFO][5857] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" HandleID="k8s-pod-network.a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--c8j52-eth0" Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.667 [INFO][5857] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:57.668973 containerd[1837]: 2024-07-02 00:24:57.667 [INFO][5851] k8s.go 621: Teardown processing complete. ContainerID="a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7" Jul 2 00:24:57.669621 containerd[1837]: time="2024-07-02T00:24:57.669029589Z" level=info msg="TearDown network for sandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" successfully" Jul 2 00:24:57.679351 containerd[1837]: time="2024-07-02T00:24:57.679302149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:24:57.679580 containerd[1837]: time="2024-07-02T00:24:57.679374150Z" level=info msg="RemovePodSandbox \"a615028a841d3331206041c2f1c0dd980d785ab98cef09e6c74c93bba6628eb7\" returns successfully" Jul 2 00:24:57.679909 containerd[1837]: time="2024-07-02T00:24:57.679866158Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.713 [WARNING][5875] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0", GenerateName:"calico-kube-controllers-684dd7f97c-", Namespace:"calico-system", SelfLink:"", UID:"a6d90bf0-0b28-471c-bedf-497b4879ee20", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684dd7f97c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f", Pod:"calico-kube-controllers-684dd7f97c-xvvr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323308ebf36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.713 [INFO][5875] k8s.go 608: Cleaning up netns ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.713 [INFO][5875] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" iface="eth0" netns="" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.713 [INFO][5875] k8s.go 615: Releasing IP address(es) ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.713 [INFO][5875] utils.go 188: Calico CNI releasing IP address ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.733 [INFO][5881] ipam_plugin.go 411: Releasing address using handleID ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.733 [INFO][5881] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.733 [INFO][5881] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.738 [WARNING][5881] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.738 [INFO][5881] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.739 [INFO][5881] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:57.741415 containerd[1837]: 2024-07-02 00:24:57.740 [INFO][5875] k8s.go 621: Teardown processing complete. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.742261 containerd[1837]: time="2024-07-02T00:24:57.741457820Z" level=info msg="TearDown network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" successfully" Jul 2 00:24:57.742261 containerd[1837]: time="2024-07-02T00:24:57.741490320Z" level=info msg="StopPodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" returns successfully" Jul 2 00:24:57.742261 containerd[1837]: time="2024-07-02T00:24:57.741997728Z" level=info msg="RemovePodSandbox for \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:24:57.742261 containerd[1837]: time="2024-07-02T00:24:57.742035029Z" level=info msg="Forcibly stopping sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\"" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.773 [WARNING][5899] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0", GenerateName:"calico-kube-controllers-684dd7f97c-", Namespace:"calico-system", SelfLink:"", UID:"a6d90bf0-0b28-471c-bedf-497b4879ee20", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684dd7f97c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"a39ecc918981a778ea88d95756758f04a9953304b0dda2ffddac0dd880dc305f", Pod:"calico-kube-controllers-684dd7f97c-xvvr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323308ebf36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.774 [INFO][5899] k8s.go 608: Cleaning up netns ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.774 [INFO][5899] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" iface="eth0" netns="" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.774 [INFO][5899] k8s.go 615: Releasing IP address(es) ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.774 [INFO][5899] utils.go 188: Calico CNI releasing IP address ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.796 [INFO][5905] ipam_plugin.go 411: Releasing address using handleID ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.796 [INFO][5905] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.796 [INFO][5905] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.802 [WARNING][5905] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.802 [INFO][5905] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" HandleID="k8s-pod-network.fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--kube--controllers--684dd7f97c--xvvr6-eth0" Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.804 [INFO][5905] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:57.806609 containerd[1837]: 2024-07-02 00:24:57.805 [INFO][5899] k8s.go 621: Teardown processing complete. ContainerID="fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f" Jul 2 00:24:57.806609 containerd[1837]: time="2024-07-02T00:24:57.806571637Z" level=info msg="TearDown network for sandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" successfully" Jul 2 00:24:57.814117 containerd[1837]: time="2024-07-02T00:24:57.814068254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:24:57.814452 containerd[1837]: time="2024-07-02T00:24:57.814137355Z" level=info msg="RemovePodSandbox \"fcf0e444072bbbb706dbaa3a079c80e157fc1cffec8464d9730ab2473a82034f\" returns successfully" Jul 2 00:24:57.814746 containerd[1837]: time="2024-07-02T00:24:57.814713264Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.845 [WARNING][5923] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2", Pod:"coredns-5dd5756b68-dxcvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cdba5e3b60", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.845 [INFO][5923] k8s.go 608: Cleaning up netns ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.846 [INFO][5923] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" iface="eth0" netns="" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.846 [INFO][5923] k8s.go 615: Releasing IP address(es) ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.846 [INFO][5923] utils.go 188: Calico CNI releasing IP address ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.867 [INFO][5929] ipam_plugin.go 411: Releasing address using handleID ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.867 [INFO][5929] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.867 [INFO][5929] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.872 [WARNING][5929] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.872 [INFO][5929] ipam_plugin.go 439: Releasing address using workloadID ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.873 [INFO][5929] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:57.875546 containerd[1837]: 2024-07-02 00:24:57.874 [INFO][5923] k8s.go 621: Teardown processing complete. 
ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.876584 containerd[1837]: time="2024-07-02T00:24:57.875588515Z" level=info msg="TearDown network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" successfully" Jul 2 00:24:57.876584 containerd[1837]: time="2024-07-02T00:24:57.875619815Z" level=info msg="StopPodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" returns successfully" Jul 2 00:24:57.876584 containerd[1837]: time="2024-07-02T00:24:57.876228125Z" level=info msg="RemovePodSandbox for \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:57.876584 containerd[1837]: time="2024-07-02T00:24:57.876261625Z" level=info msg="Forcibly stopping sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\"" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.907 [WARNING][5947] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d2957c7-4985-4bb6-a0d2-a9fd7359e6dc", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"4b91a0b8e584af5f3b72c33514728f5290f7036a1665afd815f807caca2526b2", Pod:"coredns-5dd5756b68-dxcvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cdba5e3b60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.907 [INFO][5947] k8s.go 608: 
Cleaning up netns ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.907 [INFO][5947] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" iface="eth0" netns="" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.907 [INFO][5947] k8s.go 615: Releasing IP address(es) ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.907 [INFO][5947] utils.go 188: Calico CNI releasing IP address ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.926 [INFO][5953] ipam_plugin.go 411: Releasing address using handleID ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.927 [INFO][5953] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.927 [INFO][5953] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.932 [WARNING][5953] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.932 [INFO][5953] ipam_plugin.go 439: Releasing address using workloadID ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" HandleID="k8s-pod-network.245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-coredns--5dd5756b68--dxcvb-eth0" Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.933 [INFO][5953] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:57.935945 containerd[1837]: 2024-07-02 00:24:57.934 [INFO][5947] k8s.go 621: Teardown processing complete. ContainerID="245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b" Jul 2 00:24:57.936755 containerd[1837]: time="2024-07-02T00:24:57.935978758Z" level=info msg="TearDown network for sandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" successfully" Jul 2 00:24:57.954415 containerd[1837]: time="2024-07-02T00:24:57.954368845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:24:57.954592 containerd[1837]: time="2024-07-02T00:24:57.954439446Z" level=info msg="RemovePodSandbox \"245c585568039eef29b8661149b13f107fbaca18987e57f84c8d1b347ee6f80b\" returns successfully" Jul 2 00:24:57.955065 containerd[1837]: time="2024-07-02T00:24:57.955031256Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:57.985 [WARNING][5971] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"681633ee-4999-4e86-b7fb-b802b78615ed", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97", Pod:"csi-node-driver-586k2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b39a402ba8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:57.986 [INFO][5971] k8s.go 608: Cleaning up netns ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:57.986 [INFO][5971] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" iface="eth0" netns="" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:57.986 [INFO][5971] k8s.go 615: Releasing IP address(es) ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:57.986 [INFO][5971] utils.go 188: Calico CNI releasing IP address ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.004 [INFO][5977] ipam_plugin.go 411: Releasing address using handleID ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.004 [INFO][5977] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.004 [INFO][5977] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.009 [WARNING][5977] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.009 [INFO][5977] ipam_plugin.go 439: Releasing address using workloadID ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.010 [INFO][5977] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:58.012689 containerd[1837]: 2024-07-02 00:24:58.011 [INFO][5971] k8s.go 621: Teardown processing complete. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.013548 containerd[1837]: time="2024-07-02T00:24:58.012701856Z" level=info msg="TearDown network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" successfully" Jul 2 00:24:58.013548 containerd[1837]: time="2024-07-02T00:24:58.012733557Z" level=info msg="StopPodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" returns successfully" Jul 2 00:24:58.013548 containerd[1837]: time="2024-07-02T00:24:58.013222564Z" level=info msg="RemovePodSandbox for \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:24:58.013548 containerd[1837]: time="2024-07-02T00:24:58.013261065Z" level=info msg="Forcibly stopping sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\"" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.045 [WARNING][5996] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"681633ee-4999-4e86-b7fb-b802b78615ed", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"d799e3b29b7d159db05cc19ec660ae34865bd3870ed1aa8c561cc1ec666b2f97", Pod:"csi-node-driver-586k2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b39a402ba8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.045 [INFO][5996] k8s.go 608: Cleaning up netns ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.045 [INFO][5996] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" iface="eth0" netns="" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.045 [INFO][5996] k8s.go 615: Releasing IP address(es) ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.045 [INFO][5996] utils.go 188: Calico CNI releasing IP address ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.063 [INFO][6002] ipam_plugin.go 411: Releasing address using handleID ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.063 [INFO][6002] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.063 [INFO][6002] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.068 [WARNING][6002] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.068 [INFO][6002] ipam_plugin.go 439: Releasing address using workloadID ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" HandleID="k8s-pod-network.32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-csi--node--driver--586k2-eth0" Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.069 [INFO][6002] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:58.072988 containerd[1837]: 2024-07-02 00:24:58.070 [INFO][5996] k8s.go 621: Teardown processing complete. ContainerID="32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5" Jul 2 00:24:58.072988 containerd[1837]: time="2024-07-02T00:24:58.071907081Z" level=info msg="TearDown network for sandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" successfully" Jul 2 00:24:58.082108 containerd[1837]: time="2024-07-02T00:24:58.082066540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:24:58.082242 containerd[1837]: time="2024-07-02T00:24:58.082167941Z" level=info msg="RemovePodSandbox \"32760b87c7094e30daaa19e216db96548eba5690c81b82ba742a7f3398fcdea5\" returns successfully" Jul 2 00:25:07.598000 kubelet[3384]: I0702 00:25:07.597891 3384 topology_manager.go:215] "Topology Admit Handler" podUID="62c3c2f0-9559-4fea-a3b4-383ea303b0a4" podNamespace="calico-apiserver" podName="calico-apiserver-94c68478d-ssc28" Jul 2 00:25:07.636522 kubelet[3384]: I0702 00:25:07.636328 3384 topology_manager.go:215] "Topology Admit Handler" podUID="9e4ab960-5eaa-4952-bd12-c3dc78bc27a2" podNamespace="calico-apiserver" podName="calico-apiserver-94c68478d-lg95d" Jul 2 00:25:07.670179 kubelet[3384]: I0702 00:25:07.670134 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/62c3c2f0-9559-4fea-a3b4-383ea303b0a4-calico-apiserver-certs\") pod \"calico-apiserver-94c68478d-ssc28\" (UID: \"62c3c2f0-9559-4fea-a3b4-383ea303b0a4\") " pod="calico-apiserver/calico-apiserver-94c68478d-ssc28" Jul 2 00:25:07.670348 kubelet[3384]: I0702 00:25:07.670205 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mfk\" (UniqueName: \"kubernetes.io/projected/62c3c2f0-9559-4fea-a3b4-383ea303b0a4-kube-api-access-p8mfk\") pod \"calico-apiserver-94c68478d-ssc28\" (UID: \"62c3c2f0-9559-4fea-a3b4-383ea303b0a4\") " pod="calico-apiserver/calico-apiserver-94c68478d-ssc28" Jul 2 00:25:07.771700 kubelet[3384]: I0702 00:25:07.770861 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7f6\" (UniqueName: \"kubernetes.io/projected/9e4ab960-5eaa-4952-bd12-c3dc78bc27a2-kube-api-access-9m7f6\") pod \"calico-apiserver-94c68478d-lg95d\" (UID: \"9e4ab960-5eaa-4952-bd12-c3dc78bc27a2\") " pod="calico-apiserver/calico-apiserver-94c68478d-lg95d" Jul 2 00:25:07.771700 
kubelet[3384]: I0702 00:25:07.770940 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e4ab960-5eaa-4952-bd12-c3dc78bc27a2-calico-apiserver-certs\") pod \"calico-apiserver-94c68478d-lg95d\" (UID: \"9e4ab960-5eaa-4952-bd12-c3dc78bc27a2\") " pod="calico-apiserver/calico-apiserver-94c68478d-lg95d" Jul 2 00:25:07.771700 kubelet[3384]: E0702 00:25:07.771417 3384 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:25:07.771700 kubelet[3384]: E0702 00:25:07.771544 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62c3c2f0-9559-4fea-a3b4-383ea303b0a4-calico-apiserver-certs podName:62c3c2f0-9559-4fea-a3b4-383ea303b0a4 nodeName:}" failed. No retries permitted until 2024-07-02 00:25:08.271502244 +0000 UTC m=+131.196660953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/62c3c2f0-9559-4fea-a3b4-383ea303b0a4-calico-apiserver-certs") pod "calico-apiserver-94c68478d-ssc28" (UID: "62c3c2f0-9559-4fea-a3b4-383ea303b0a4") : secret "calico-apiserver-certs" not found Jul 2 00:25:07.947340 containerd[1837]: time="2024-07-02T00:25:07.947293118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94c68478d-lg95d,Uid:9e4ab960-5eaa-4952-bd12-c3dc78bc27a2,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:25:08.099751 systemd-networkd[1407]: calid76f259cd03: Link UP Jul 2 00:25:08.101491 systemd-networkd[1407]: calid76f259cd03: Gained carrier Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.040 [INFO][6051] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0 calico-apiserver-94c68478d- calico-apiserver 9e4ab960-5eaa-4952-bd12-c3dc78bc27a2 1081 
0 2024-07-02 00:25:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94c68478d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-106c6d4ee2 calico-apiserver-94c68478d-lg95d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid76f259cd03 [] []}} ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.040 [INFO][6051] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.063 [INFO][6061] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" HandleID="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.071 [INFO][6061] ipam_plugin.go 264: Auto assigning IP ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" HandleID="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265c40), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-3975.1.1-a-106c6d4ee2", "pod":"calico-apiserver-94c68478d-lg95d", "timestamp":"2024-07-02 00:25:08.063572152 +0000 UTC"}, Hostname:"ci-3975.1.1-a-106c6d4ee2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.071 [INFO][6061] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.071 [INFO][6061] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.071 [INFO][6061] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-106c6d4ee2' Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.073 [INFO][6061] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.076 [INFO][6061] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.081 [INFO][6061] ipam.go 489: Trying affinity for 192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.082 [INFO][6061] ipam.go 155: Attempting to load block cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.085 [INFO][6061] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.085 [INFO][6061] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.128/26 handle="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" 
host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.086 [INFO][6061] ipam.go 1685: Creating new handle: k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620 Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.089 [INFO][6061] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.128/26 handle="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.093 [INFO][6061] ipam.go 1216: Successfully claimed IPs: [192.168.14.133/26] block=192.168.14.128/26 handle="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.093 [INFO][6061] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.133/26] handle="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.095 [INFO][6061] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:25:08.121299 containerd[1837]: 2024-07-02 00:25:08.095 [INFO][6061] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.133/26] IPv6=[] ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" HandleID="k8s-pod-network.1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.123783 containerd[1837]: 2024-07-02 00:25:08.096 [INFO][6051] k8s.go 386: Populated endpoint ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0", GenerateName:"calico-apiserver-94c68478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e4ab960-5eaa-4952-bd12-c3dc78bc27a2", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94c68478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"", Pod:"calico-apiserver-94c68478d-lg95d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid76f259cd03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:08.123783 containerd[1837]: 2024-07-02 00:25:08.096 [INFO][6051] k8s.go 387: Calico CNI using IPs: [192.168.14.133/32] ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.123783 containerd[1837]: 2024-07-02 00:25:08.096 [INFO][6051] dataplane_linux.go 68: Setting the host side veth name to calid76f259cd03 ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.123783 containerd[1837]: 2024-07-02 00:25:08.101 [INFO][6051] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.123783 containerd[1837]: 2024-07-02 00:25:08.102 [INFO][6051] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0", 
GenerateName:"calico-apiserver-94c68478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e4ab960-5eaa-4952-bd12-c3dc78bc27a2", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94c68478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620", Pod:"calico-apiserver-94c68478d-lg95d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid76f259cd03", MAC:"fe:41:57:da:84:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:08.123783 containerd[1837]: 2024-07-02 00:25:08.114 [INFO][6051] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-lg95d" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--lg95d-eth0" Jul 2 00:25:08.167642 containerd[1837]: time="2024-07-02T00:25:08.167562793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:08.167942 containerd[1837]: time="2024-07-02T00:25:08.167681495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:08.167942 containerd[1837]: time="2024-07-02T00:25:08.167720695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:08.167942 containerd[1837]: time="2024-07-02T00:25:08.167744596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:08.221144 containerd[1837]: time="2024-07-02T00:25:08.221038036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94c68478d-lg95d,Uid:9e4ab960-5eaa-4952-bd12-c3dc78bc27a2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620\"" Jul 2 00:25:08.223531 containerd[1837]: time="2024-07-02T00:25:08.223356773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:25:08.515500 containerd[1837]: time="2024-07-02T00:25:08.515373580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94c68478d-ssc28,Uid:62c3c2f0-9559-4fea-a3b4-383ea303b0a4,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:25:08.654536 systemd-networkd[1407]: calif7feac0b96f: Link UP Jul 2 00:25:08.655651 systemd-networkd[1407]: calif7feac0b96f: Gained carrier Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.580 [INFO][6125] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0 calico-apiserver-94c68478d- calico-apiserver 62c3c2f0-9559-4fea-a3b4-383ea303b0a4 1075 0 2024-07-02 00:25:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:94c68478d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-106c6d4ee2 calico-apiserver-94c68478d-ssc28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7feac0b96f [] []}} ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.580 [INFO][6125] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.615 [INFO][6136] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" HandleID="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.622 [INFO][6136] ipam_plugin.go 264: Auto assigning IP ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" HandleID="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267d00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-106c6d4ee2", "pod":"calico-apiserver-94c68478d-ssc28", "timestamp":"2024-07-02 00:25:08.615124554 
+0000 UTC"}, Hostname:"ci-3975.1.1-a-106c6d4ee2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.622 [INFO][6136] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.622 [INFO][6136] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.622 [INFO][6136] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-106c6d4ee2' Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.624 [INFO][6136] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.628 [INFO][6136] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.632 [INFO][6136] ipam.go 489: Trying affinity for 192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.634 [INFO][6136] ipam.go 155: Attempting to load block cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.637 [INFO][6136] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.128/26 host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.637 [INFO][6136] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.128/26 handle="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.639 [INFO][6136] ipam.go 1685: Creating new 
handle: k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.642 [INFO][6136] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.128/26 handle="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.648 [INFO][6136] ipam.go 1216: Successfully claimed IPs: [192.168.14.134/26] block=192.168.14.128/26 handle="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.648 [INFO][6136] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.134/26] handle="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" host="ci-3975.1.1-a-106c6d4ee2" Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.648 [INFO][6136] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:25:08.676908 containerd[1837]: 2024-07-02 00:25:08.648 [INFO][6136] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.134/26] IPv6=[] ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" HandleID="k8s-pod-network.fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Workload="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.677896 containerd[1837]: 2024-07-02 00:25:08.650 [INFO][6125] k8s.go 386: Populated endpoint ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0", GenerateName:"calico-apiserver-94c68478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"62c3c2f0-9559-4fea-a3b4-383ea303b0a4", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94c68478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"", Pod:"calico-apiserver-94c68478d-ssc28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7feac0b96f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:08.677896 containerd[1837]: 2024-07-02 00:25:08.650 [INFO][6125] k8s.go 387: Calico CNI using IPs: [192.168.14.134/32] ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.677896 containerd[1837]: 2024-07-02 00:25:08.650 [INFO][6125] dataplane_linux.go 68: Setting the host side veth name to calif7feac0b96f ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.677896 containerd[1837]: 2024-07-02 00:25:08.655 [INFO][6125] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.677896 containerd[1837]: 2024-07-02 00:25:08.656 [INFO][6125] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0", 
GenerateName:"calico-apiserver-94c68478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"62c3c2f0-9559-4fea-a3b4-383ea303b0a4", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94c68478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-106c6d4ee2", ContainerID:"fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae", Pod:"calico-apiserver-94c68478d-ssc28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7feac0b96f", MAC:"26:a5:3a:7e:cb:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:08.677896 containerd[1837]: 2024-07-02 00:25:08.671 [INFO][6125] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae" Namespace="calico-apiserver" Pod="calico-apiserver-94c68478d-ssc28" WorkloadEndpoint="ci--3975.1.1--a--106c6d4ee2-k8s-calico--apiserver--94c68478d--ssc28-eth0" Jul 2 00:25:08.712645 containerd[1837]: time="2024-07-02T00:25:08.712451089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:08.712645 containerd[1837]: time="2024-07-02T00:25:08.712505190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:08.712645 containerd[1837]: time="2024-07-02T00:25:08.712536790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:08.712645 containerd[1837]: time="2024-07-02T00:25:08.712560091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:08.766034 containerd[1837]: time="2024-07-02T00:25:08.765898632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94c68478d-ssc28,Uid:62c3c2f0-9559-4fea-a3b4-383ea303b0a4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae\"" Jul 2 00:25:09.435823 systemd-networkd[1407]: calid76f259cd03: Gained IPv6LL Jul 2 00:25:09.693821 systemd-networkd[1407]: calif7feac0b96f: Gained IPv6LL Jul 2 00:25:13.755554 containerd[1837]: time="2024-07-02T00:25:13.755500251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:13.757603 containerd[1837]: time="2024-07-02T00:25:13.757538783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 00:25:13.761122 containerd[1837]: time="2024-07-02T00:25:13.761048938Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:13.765820 containerd[1837]: time="2024-07-02T00:25:13.765734812Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:13.767358 containerd[1837]: time="2024-07-02T00:25:13.766634126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 5.543221052s" Jul 2 00:25:13.767358 containerd[1837]: time="2024-07-02T00:25:13.766717328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:25:13.769614 containerd[1837]: time="2024-07-02T00:25:13.768191151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:25:13.769614 containerd[1837]: time="2024-07-02T00:25:13.769271268Z" level=info msg="CreateContainer within sandbox \"1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:25:13.805123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3172567620.mount: Deactivated successfully. 
Jul 2 00:25:13.810949 containerd[1837]: time="2024-07-02T00:25:13.810908325Z" level=info msg="CreateContainer within sandbox \"1e527587c374f85b79891615cdfba7e08dda4f3ebd6053126f5541b821e41620\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f883c871c7d0c9a6b19be00e6172aa796de2adf8a143ae67c11adc0e54e6bacc\"" Jul 2 00:25:13.811621 containerd[1837]: time="2024-07-02T00:25:13.811390132Z" level=info msg="StartContainer for \"f883c871c7d0c9a6b19be00e6172aa796de2adf8a143ae67c11adc0e54e6bacc\"" Jul 2 00:25:13.895125 containerd[1837]: time="2024-07-02T00:25:13.895062452Z" level=info msg="StartContainer for \"f883c871c7d0c9a6b19be00e6172aa796de2adf8a143ae67c11adc0e54e6bacc\" returns successfully" Jul 2 00:25:14.218617 containerd[1837]: time="2024-07-02T00:25:14.217736343Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:14.220635 containerd[1837]: time="2024-07-02T00:25:14.220588688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jul 2 00:25:14.223127 containerd[1837]: time="2024-07-02T00:25:14.223093028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 454.861876ms" Jul 2 00:25:14.223241 containerd[1837]: time="2024-07-02T00:25:14.223224830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:25:14.226505 containerd[1837]: time="2024-07-02T00:25:14.226481581Z" level=info msg="CreateContainer within sandbox 
\"fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:25:14.267691 containerd[1837]: time="2024-07-02T00:25:14.267513328Z" level=info msg="CreateContainer within sandbox \"fcebd661c03cfa5bece046dc657280fdfde407e01ba57efd3e9c4c40f179ebae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"71dfd8a2ac32f4497b8e5e999baadc7da66550a2f43d4842e890877582a33044\"" Jul 2 00:25:14.269776 containerd[1837]: time="2024-07-02T00:25:14.268481444Z" level=info msg="StartContainer for \"71dfd8a2ac32f4497b8e5e999baadc7da66550a2f43d4842e890877582a33044\"" Jul 2 00:25:14.362558 containerd[1837]: time="2024-07-02T00:25:14.362515427Z" level=info msg="StartContainer for \"71dfd8a2ac32f4497b8e5e999baadc7da66550a2f43d4842e890877582a33044\" returns successfully" Jul 2 00:25:14.636049 kubelet[3384]: I0702 00:25:14.635443 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94c68478d-lg95d" podStartSLOduration=2.091238533 podCreationTimestamp="2024-07-02 00:25:07 +0000 UTC" firstStartedPulling="2024-07-02 00:25:08.222885465 +0000 UTC m=+131.148044074" lastFinishedPulling="2024-07-02 00:25:13.767044933 +0000 UTC m=+136.692203542" observedRunningTime="2024-07-02 00:25:14.634886693 +0000 UTC m=+137.560045302" watchObservedRunningTime="2024-07-02 00:25:14.635398001 +0000 UTC m=+137.560556710" Jul 2 00:25:14.675855 kubelet[3384]: I0702 00:25:14.674400 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94c68478d-ssc28" podStartSLOduration=2.219612555 podCreationTimestamp="2024-07-02 00:25:07 +0000 UTC" firstStartedPulling="2024-07-02 00:25:08.768851579 +0000 UTC m=+131.694010188" lastFinishedPulling="2024-07-02 00:25:14.223588835 +0000 UTC m=+137.148747444" observedRunningTime="2024-07-02 00:25:14.656722135 +0000 UTC m=+137.581880744" 
watchObservedRunningTime="2024-07-02 00:25:14.674349811 +0000 UTC m=+137.599508420" Jul 2 00:25:29.748020 systemd[1]: Started sshd@7-10.200.8.10:22-10.200.16.10:48714.service - OpenSSH per-connection server daemon (10.200.16.10:48714). Jul 2 00:25:30.407092 sshd[6327]: Accepted publickey for core from 10.200.16.10 port 48714 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:30.408597 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:30.412761 systemd-logind[1807]: New session 10 of user core. Jul 2 00:25:30.417904 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:25:30.927978 sshd[6327]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:30.932945 systemd[1]: sshd@7-10.200.8.10:22-10.200.16.10:48714.service: Deactivated successfully. Jul 2 00:25:30.938065 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:25:30.938978 systemd-logind[1807]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:25:30.939934 systemd-logind[1807]: Removed session 10. Jul 2 00:25:32.359044 systemd[1]: run-containerd-runc-k8s.io-bd88a681e955a3c07adf4d09c8219bb4020e29942ec58b9662193feaacb0e025-runc.esbE6v.mount: Deactivated successfully. Jul 2 00:25:36.048967 systemd[1]: Started sshd@8-10.200.8.10:22-10.200.16.10:48722.service - OpenSSH per-connection server daemon (10.200.16.10:48722). Jul 2 00:25:36.690524 sshd[6372]: Accepted publickey for core from 10.200.16.10 port 48722 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:36.692005 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:36.696190 systemd-logind[1807]: New session 11 of user core. Jul 2 00:25:36.700921 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 2 00:25:37.204698 sshd[6372]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:37.210032 systemd[1]: sshd@8-10.200.8.10:22-10.200.16.10:48722.service: Deactivated successfully. Jul 2 00:25:37.214036 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:25:37.214317 systemd-logind[1807]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:25:37.215952 systemd-logind[1807]: Removed session 11. Jul 2 00:25:42.315989 systemd[1]: Started sshd@9-10.200.8.10:22-10.200.16.10:51744.service - OpenSSH per-connection server daemon (10.200.16.10:51744). Jul 2 00:25:42.976885 sshd[6390]: Accepted publickey for core from 10.200.16.10 port 51744 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:42.978499 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:42.982999 systemd-logind[1807]: New session 12 of user core. Jul 2 00:25:42.987945 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:25:43.489178 sshd[6390]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:43.492754 systemd[1]: sshd@9-10.200.8.10:22-10.200.16.10:51744.service: Deactivated successfully. Jul 2 00:25:43.499205 systemd-logind[1807]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:25:43.499889 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:25:43.501055 systemd-logind[1807]: Removed session 12. Jul 2 00:25:48.599970 systemd[1]: Started sshd@10-10.200.8.10:22-10.200.16.10:55492.service - OpenSSH per-connection server daemon (10.200.16.10:55492). Jul 2 00:25:49.237067 sshd[6410]: Accepted publickey for core from 10.200.16.10 port 55492 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:49.238839 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:49.244502 systemd-logind[1807]: New session 13 of user core. 
Jul 2 00:25:49.248448 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:25:49.746332 sshd[6410]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:49.752525 systemd[1]: sshd@10-10.200.8.10:22-10.200.16.10:55492.service: Deactivated successfully. Jul 2 00:25:49.757251 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:25:49.757281 systemd-logind[1807]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:25:49.758639 systemd-logind[1807]: Removed session 13. Jul 2 00:25:49.859947 systemd[1]: Started sshd@11-10.200.8.10:22-10.200.16.10:55494.service - OpenSSH per-connection server daemon (10.200.16.10:55494). Jul 2 00:25:50.503050 sshd[6427]: Accepted publickey for core from 10.200.16.10 port 55494 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:50.504506 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:50.508633 systemd-logind[1807]: New session 14 of user core. Jul 2 00:25:50.513150 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:25:51.657856 sshd[6427]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:51.661581 systemd[1]: sshd@11-10.200.8.10:22-10.200.16.10:55494.service: Deactivated successfully. Jul 2 00:25:51.667846 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:25:51.668965 systemd-logind[1807]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:25:51.670065 systemd-logind[1807]: Removed session 14. Jul 2 00:25:51.771071 systemd[1]: Started sshd@12-10.200.8.10:22-10.200.16.10:55496.service - OpenSSH per-connection server daemon (10.200.16.10:55496). 
Jul 2 00:25:52.413560 sshd[6439]: Accepted publickey for core from 10.200.16.10 port 55496 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:52.415264 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:52.419869 systemd-logind[1807]: New session 15 of user core. Jul 2 00:25:52.423190 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:25:52.940753 sshd[6439]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:52.944391 systemd[1]: sshd@12-10.200.8.10:22-10.200.16.10:55496.service: Deactivated successfully. Jul 2 00:25:52.951592 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:25:52.952466 systemd-logind[1807]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:25:52.953434 systemd-logind[1807]: Removed session 15. Jul 2 00:25:58.041945 systemd[1]: Started sshd@13-10.200.8.10:22-10.200.16.10:55512.service - OpenSSH per-connection server daemon (10.200.16.10:55512). Jul 2 00:25:58.688809 sshd[6502]: Accepted publickey for core from 10.200.16.10 port 55512 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:25:58.690258 sshd[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:58.694504 systemd-logind[1807]: New session 16 of user core. Jul 2 00:25:58.700166 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:25:59.200794 sshd[6502]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:59.204588 systemd[1]: sshd@13-10.200.8.10:22-10.200.16.10:55512.service: Deactivated successfully. Jul 2 00:25:59.210653 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:25:59.211544 systemd-logind[1807]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:25:59.212502 systemd-logind[1807]: Removed session 16. 
Jul 2 00:26:02.352414 systemd[1]: run-containerd-runc-k8s.io-bd88a681e955a3c07adf4d09c8219bb4020e29942ec58b9662193feaacb0e025-runc.2nELy5.mount: Deactivated successfully. Jul 2 00:26:04.315415 systemd[1]: Started sshd@14-10.200.8.10:22-10.200.16.10:58284.service - OpenSSH per-connection server daemon (10.200.16.10:58284). Jul 2 00:26:04.970524 sshd[6548]: Accepted publickey for core from 10.200.16.10 port 58284 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:04.972180 sshd[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:04.976811 systemd-logind[1807]: New session 17 of user core. Jul 2 00:26:04.981045 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:26:05.485765 sshd[6548]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:05.489499 systemd[1]: sshd@14-10.200.8.10:22-10.200.16.10:58284.service: Deactivated successfully. Jul 2 00:26:05.495801 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:26:05.497086 systemd-logind[1807]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:26:05.497990 systemd-logind[1807]: Removed session 17. Jul 2 00:26:10.597157 systemd[1]: Started sshd@15-10.200.8.10:22-10.200.16.10:54096.service - OpenSSH per-connection server daemon (10.200.16.10:54096). Jul 2 00:26:11.239614 sshd[6570]: Accepted publickey for core from 10.200.16.10 port 54096 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:11.241130 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:11.245303 systemd-logind[1807]: New session 18 of user core. Jul 2 00:26:11.252093 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:26:11.750981 sshd[6570]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:11.756815 systemd[1]: sshd@15-10.200.8.10:22-10.200.16.10:54096.service: Deactivated successfully. 
Jul 2 00:26:11.761379 systemd-logind[1807]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:26:11.761492 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:26:11.763895 systemd-logind[1807]: Removed session 18. Jul 2 00:26:16.862246 systemd[1]: Started sshd@16-10.200.8.10:22-10.200.16.10:54104.service - OpenSSH per-connection server daemon (10.200.16.10:54104). Jul 2 00:26:17.507826 sshd[6589]: Accepted publickey for core from 10.200.16.10 port 54104 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:17.509399 sshd[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:17.513884 systemd-logind[1807]: New session 19 of user core. Jul 2 00:26:17.520965 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:26:18.040949 sshd[6589]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:18.044930 systemd[1]: sshd@16-10.200.8.10:22-10.200.16.10:54104.service: Deactivated successfully. Jul 2 00:26:18.050123 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:26:18.050984 systemd-logind[1807]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:26:18.051939 systemd-logind[1807]: Removed session 19. Jul 2 00:26:18.157084 systemd[1]: Started sshd@17-10.200.8.10:22-10.200.16.10:54116.service - OpenSSH per-connection server daemon (10.200.16.10:54116). Jul 2 00:26:18.803189 sshd[6614]: Accepted publickey for core from 10.200.16.10 port 54116 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:18.804744 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:18.809559 systemd-logind[1807]: New session 20 of user core. Jul 2 00:26:18.814233 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 2 00:26:19.377090 sshd[6614]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:19.380717 systemd[1]: sshd@17-10.200.8.10:22-10.200.16.10:54116.service: Deactivated successfully. Jul 2 00:26:19.386603 systemd-logind[1807]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:26:19.387158 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:26:19.388487 systemd-logind[1807]: Removed session 20. Jul 2 00:26:19.489576 systemd[1]: Started sshd@18-10.200.8.10:22-10.200.16.10:60304.service - OpenSSH per-connection server daemon (10.200.16.10:60304). Jul 2 00:26:20.162277 sshd[6625]: Accepted publickey for core from 10.200.16.10 port 60304 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:20.163826 sshd[6625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:20.168556 systemd-logind[1807]: New session 21 of user core. Jul 2 00:26:20.175084 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:26:21.646610 sshd[6625]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:21.653104 systemd[1]: sshd@18-10.200.8.10:22-10.200.16.10:60304.service: Deactivated successfully. Jul 2 00:26:21.658179 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:26:21.659309 systemd-logind[1807]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:26:21.660480 systemd-logind[1807]: Removed session 21. Jul 2 00:26:21.758054 systemd[1]: Started sshd@19-10.200.8.10:22-10.200.16.10:60314.service - OpenSSH per-connection server daemon (10.200.16.10:60314). Jul 2 00:26:22.398458 sshd[6645]: Accepted publickey for core from 10.200.16.10 port 60314 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:22.400241 sshd[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:22.405096 systemd-logind[1807]: New session 22 of user core. 
Jul 2 00:26:22.408220 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:26:23.161823 update_engine[1811]: I0702 00:26:23.161767 1811 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 00:26:23.161823 update_engine[1811]: I0702 00:26:23.161818 1811 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 00:26:23.162482 update_engine[1811]: I0702 00:26:23.162066 1811 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 00:26:23.162782 update_engine[1811]: I0702 00:26:23.162727 1811 omaha_request_params.cc:62] Current group set to beta Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.162914 1811 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.162930 1811 update_attempter.cc:643] Scheduling an action processor start. Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.162950 1811 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.162992 1811 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.163076 1811 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.163084 1811 omaha_request_action.cc:272] Request: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: Jul 2 00:26:23.163090 update_engine[1811]: I0702 00:26:23.163092 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:23.165817 locksmithd[1859]: 
LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 00:26:23.166224 update_engine[1811]: I0702 00:26:23.165189 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:23.166224 update_engine[1811]: I0702 00:26:23.165725 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:26:23.180824 update_engine[1811]: E0702 00:26:23.180785 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:23.180959 update_engine[1811]: I0702 00:26:23.180869 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 00:26:23.209627 sshd[6645]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:23.214289 systemd[1]: sshd@19-10.200.8.10:22-10.200.16.10:60314.service: Deactivated successfully. Jul 2 00:26:23.219837 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:26:23.220876 systemd-logind[1807]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:26:23.221888 systemd-logind[1807]: Removed session 22. Jul 2 00:26:23.321556 systemd[1]: Started sshd@20-10.200.8.10:22-10.200.16.10:60318.service - OpenSSH per-connection server daemon (10.200.16.10:60318). Jul 2 00:26:23.968326 sshd[6657]: Accepted publickey for core from 10.200.16.10 port 60318 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:23.970118 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:23.974382 systemd-logind[1807]: New session 23 of user core. Jul 2 00:26:23.979906 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:26:24.477318 sshd[6657]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:24.482096 systemd[1]: sshd@20-10.200.8.10:22-10.200.16.10:60318.service: Deactivated successfully. Jul 2 00:26:24.487361 systemd[1]: session-23.scope: Deactivated successfully. 
Jul 2 00:26:24.488245 systemd-logind[1807]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:26:24.489213 systemd-logind[1807]: Removed session 23. Jul 2 00:26:27.094193 systemd[1]: run-containerd-runc-k8s.io-8368a7d2b939271ae58a2e1ff76a779dd29e14aced8cf673f7af030b07c0913c-runc.rkoxCg.mount: Deactivated successfully. Jul 2 00:26:29.590371 systemd[1]: Started sshd@21-10.200.8.10:22-10.200.16.10:58434.service - OpenSSH per-connection server daemon (10.200.16.10:58434). Jul 2 00:26:30.226277 sshd[6696]: Accepted publickey for core from 10.200.16.10 port 58434 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:30.228085 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:30.233106 systemd-logind[1807]: New session 24 of user core. Jul 2 00:26:30.237962 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:26:30.733369 sshd[6696]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:30.737947 systemd[1]: sshd@21-10.200.8.10:22-10.200.16.10:58434.service: Deactivated successfully. Jul 2 00:26:30.742509 systemd-logind[1807]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:26:30.742874 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:26:30.744850 systemd-logind[1807]: Removed session 24. Jul 2 00:26:33.161481 update_engine[1811]: I0702 00:26:33.161422 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:33.162101 update_engine[1811]: I0702 00:26:33.161734 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:33.162101 update_engine[1811]: I0702 00:26:33.162071 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:26:33.195883 update_engine[1811]: E0702 00:26:33.195829 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:33.196025 update_engine[1811]: I0702 00:26:33.195916 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 00:26:35.847258 systemd[1]: Started sshd@22-10.200.8.10:22-10.200.16.10:58448.service - OpenSSH per-connection server daemon (10.200.16.10:58448). Jul 2 00:26:36.496191 sshd[6734]: Accepted publickey for core from 10.200.16.10 port 58448 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:36.498128 sshd[6734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:36.503390 systemd-logind[1807]: New session 25 of user core. Jul 2 00:26:36.508092 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:26:37.007559 sshd[6734]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:37.011431 systemd[1]: sshd@22-10.200.8.10:22-10.200.16.10:58448.service: Deactivated successfully. Jul 2 00:26:37.017912 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:26:37.018786 systemd-logind[1807]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:26:37.019709 systemd-logind[1807]: Removed session 25. Jul 2 00:26:42.119977 systemd[1]: Started sshd@23-10.200.8.10:22-10.200.16.10:45918.service - OpenSSH per-connection server daemon (10.200.16.10:45918). Jul 2 00:26:42.760576 sshd[6756]: Accepted publickey for core from 10.200.16.10 port 45918 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:42.762377 sshd[6756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:42.775329 systemd-logind[1807]: New session 26 of user core. Jul 2 00:26:42.781952 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 2 00:26:43.162131 update_engine[1811]: I0702 00:26:43.161597 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:43.162131 update_engine[1811]: I0702 00:26:43.161828 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:43.162131 update_engine[1811]: I0702 00:26:43.162091 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:26:43.166689 update_engine[1811]: E0702 00:26:43.166637 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:43.166810 update_engine[1811]: I0702 00:26:43.166723 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 00:26:43.269138 sshd[6756]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:43.274494 systemd[1]: sshd@23-10.200.8.10:22-10.200.16.10:45918.service: Deactivated successfully. Jul 2 00:26:43.279651 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:26:43.280749 systemd-logind[1807]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:26:43.281727 systemd-logind[1807]: Removed session 26. Jul 2 00:26:48.383256 systemd[1]: Started sshd@24-10.200.8.10:22-10.200.16.10:45934.service - OpenSSH per-connection server daemon (10.200.16.10:45934). Jul 2 00:26:49.035816 sshd[6778]: Accepted publickey for core from 10.200.16.10 port 45934 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:49.037399 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:49.042774 systemd-logind[1807]: New session 27 of user core. Jul 2 00:26:49.046929 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:26:49.552234 sshd[6778]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:49.555402 systemd[1]: sshd@24-10.200.8.10:22-10.200.16.10:45934.service: Deactivated successfully. Jul 2 00:26:49.560689 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 2 00:26:49.561619 systemd-logind[1807]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:26:49.563211 systemd-logind[1807]: Removed session 27. Jul 2 00:26:53.165434 update_engine[1811]: I0702 00:26:53.164790 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:53.165434 update_engine[1811]: I0702 00:26:53.165059 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:53.165434 update_engine[1811]: I0702 00:26:53.165376 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:26:53.180198 update_engine[1811]: E0702 00:26:53.180157 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:53.180340 update_engine[1811]: I0702 00:26:53.180222 1811 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:26:53.180340 update_engine[1811]: I0702 00:26:53.180228 1811 omaha_request_action.cc:617] Omaha request response: Jul 2 00:26:53.180340 update_engine[1811]: E0702 00:26:53.180307 1811 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 00:26:53.180340 update_engine[1811]: I0702 00:26:53.180331 1811 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 00:26:53.180340 update_engine[1811]: I0702 00:26:53.180335 1811 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:26:53.180340 update_engine[1811]: I0702 00:26:53.180339 1811 update_attempter.cc:306] Processing Done. Jul 2 00:26:53.180561 update_engine[1811]: E0702 00:26:53.180356 1811 update_attempter.cc:619] Update failed. 
Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180361 1811 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180364 1811 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180369 1811 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180452 1811 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180474 1811 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180480 1811 omaha_request_action.cc:272] Request: Jul 2 00:26:53.180561 update_engine[1811]: Jul 2 00:26:53.180561 update_engine[1811]: Jul 2 00:26:53.180561 update_engine[1811]: Jul 2 00:26:53.180561 update_engine[1811]: Jul 2 00:26:53.180561 update_engine[1811]: Jul 2 00:26:53.180561 update_engine[1811]: Jul 2 00:26:53.180561 update_engine[1811]: I0702 00:26:53.180485 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:53.181060 update_engine[1811]: I0702 00:26:53.180627 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:53.181105 update_engine[1811]: I0702 00:26:53.181077 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:26:53.181276 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 00:26:53.201567 update_engine[1811]: E0702 00:26:53.201517 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201596 1811 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201606 1811 omaha_request_action.cc:617] Omaha request response: Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201612 1811 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201618 1811 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201622 1811 update_attempter.cc:306] Processing Done. Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201629 1811 update_attempter.cc:310] Error event sent. Jul 2 00:26:53.201759 update_engine[1811]: I0702 00:26:53.201639 1811 update_check_scheduler.cc:74] Next update check in 44m1s Jul 2 00:26:53.202143 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 00:26:54.664982 systemd[1]: Started sshd@25-10.200.8.10:22-10.200.16.10:52482.service - OpenSSH per-connection server daemon (10.200.16.10:52482). Jul 2 00:26:55.312154 sshd[6796]: Accepted publickey for core from 10.200.16.10 port 52482 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:26:55.313914 sshd[6796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:55.318239 systemd-logind[1807]: New session 28 of user core. Jul 2 00:26:55.323900 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 2 00:26:55.825897 sshd[6796]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:55.831035 systemd[1]: sshd@25-10.200.8.10:22-10.200.16.10:52482.service: Deactivated successfully. Jul 2 00:26:55.835487 systemd-logind[1807]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:26:55.836907 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:26:55.838572 systemd-logind[1807]: Removed session 28. Jul 2 00:27:00.937005 systemd[1]: Started sshd@26-10.200.8.10:22-10.200.16.10:56690.service - OpenSSH per-connection server daemon (10.200.16.10:56690). Jul 2 00:27:01.575381 sshd[6854]: Accepted publickey for core from 10.200.16.10 port 56690 ssh2: RSA SHA256:Dl48MIrKQ9CdpX3fW+bjUXv4zPw6zX02OXQNjpRKH0I Jul 2 00:27:01.576975 sshd[6854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:01.581109 systemd-logind[1807]: New session 29 of user core. Jul 2 00:27:01.583931 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:27:02.081304 sshd[6854]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:02.085784 systemd[1]: sshd@26-10.200.8.10:22-10.200.16.10:56690.service: Deactivated successfully. Jul 2 00:27:02.092258 systemd-logind[1807]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:27:02.093216 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:27:02.094950 systemd-logind[1807]: Removed session 29. 
Jul 2 00:27:16.715860 containerd[1837]: time="2024-07-02T00:27:16.715746683Z" level=info msg="shim disconnected" id=4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710 namespace=k8s.io Jul 2 00:27:16.717129 containerd[1837]: time="2024-07-02T00:27:16.715877385Z" level=warning msg="cleaning up after shim disconnected" id=4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710 namespace=k8s.io Jul 2 00:27:16.717129 containerd[1837]: time="2024-07-02T00:27:16.715890885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:27:16.717374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710-rootfs.mount: Deactivated successfully. Jul 2 00:27:16.906166 kubelet[3384]: I0702 00:27:16.905393 3384 scope.go:117] "RemoveContainer" containerID="4735e2e21bd46667cd513ea06101b747e0955efce6ea875a65c49217283e2710" Jul 2 00:27:16.908495 containerd[1837]: time="2024-07-02T00:27:16.908454697Z" level=info msg="CreateContainer within sandbox \"50f7d34da03706f296bebeffabfb22c7b94dd78b88f5821c2a638ae0957e2a35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 00:27:16.948041 containerd[1837]: time="2024-07-02T00:27:16.948001915Z" level=info msg="CreateContainer within sandbox \"50f7d34da03706f296bebeffabfb22c7b94dd78b88f5821c2a638ae0957e2a35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2d9533ced6fd31057e5361d6be25179fa726d9157e3f8665e846a1518ac6cbe1\"" Jul 2 00:27:16.948541 containerd[1837]: time="2024-07-02T00:27:16.948511023Z" level=info msg="StartContainer for \"2d9533ced6fd31057e5361d6be25179fa726d9157e3f8665e846a1518ac6cbe1\"" Jul 2 00:27:17.032186 containerd[1837]: time="2024-07-02T00:27:17.031539421Z" level=info msg="StartContainer for \"2d9533ced6fd31057e5361d6be25179fa726d9157e3f8665e846a1518ac6cbe1\" returns successfully" Jul 2 00:27:17.828430 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100-rootfs.mount: Deactivated successfully.
Jul 2 00:27:17.829140 containerd[1837]: time="2024-07-02T00:27:17.829060194Z" level=info msg="shim disconnected" id=fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100 namespace=k8s.io
Jul 2 00:27:17.829140 containerd[1837]: time="2024-07-02T00:27:17.829122395Z" level=warning msg="cleaning up after shim disconnected" id=fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100 namespace=k8s.io
Jul 2 00:27:17.829140 containerd[1837]: time="2024-07-02T00:27:17.829132295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:27:17.910591 kubelet[3384]: I0702 00:27:17.909991 3384 scope.go:117] "RemoveContainer" containerID="fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100"
Jul 2 00:27:17.912396 containerd[1837]: time="2024-07-02T00:27:17.912355297Z" level=info msg="CreateContainer within sandbox \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 00:27:17.951292 containerd[1837]: time="2024-07-02T00:27:17.951243105Z" level=info msg="CreateContainer within sandbox \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1\""
Jul 2 00:27:17.951895 containerd[1837]: time="2024-07-02T00:27:17.951858914Z" level=info msg="StartContainer for \"b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1\""
Jul 2 00:27:18.020195 containerd[1837]: time="2024-07-02T00:27:18.020066681Z" level=info msg="StartContainer for \"b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1\" returns successfully"
Jul 2 00:27:21.213875 kubelet[3384]: E0702 00:27:21.213281 3384 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3975.1.1-a-106c6d4ee2.17de3dc69277437f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3975.1.1-a-106c6d4ee2", UID:"e8d864bcfc785c8c95d84a87dc393b85", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.1.1-a-106c6d4ee2"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 27, 10, 772216703, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 27, 10, 772216703, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.1.1-a-106c6d4ee2"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:35832->10.200.8.26:2379: read: connection timed out' (will not retry!)
Jul 2 00:27:21.533677 kubelet[3384]: E0702 00:27:21.533553 3384 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:36032->10.200.8.26:2379: read: connection timed out"
Jul 2 00:27:21.564789 containerd[1837]: time="2024-07-02T00:27:21.563326394Z" level=info msg="shim disconnected" id=426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9 namespace=k8s.io
Jul 2 00:27:21.564789 containerd[1837]: time="2024-07-02T00:27:21.563558698Z" level=warning msg="cleaning up after shim disconnected" id=426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9 namespace=k8s.io
Jul 2 00:27:21.564789 containerd[1837]: time="2024-07-02T00:27:21.563572498Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:27:21.563753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9-rootfs.mount: Deactivated successfully.
Jul 2 00:27:21.924562 kubelet[3384]: I0702 00:27:21.924533 3384 scope.go:117] "RemoveContainer" containerID="426aee5ae424802a6664e34568cea9cb08838174d20e5d284cbdf59f77ea62b9"
Jul 2 00:27:21.926706 containerd[1837]: time="2024-07-02T00:27:21.926637776Z" level=info msg="CreateContainer within sandbox \"7123546f7793ed1687ecea72ccf14b60c045032d1b8371dc804de38cf6bbc0d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 00:27:21.960266 containerd[1837]: time="2024-07-02T00:27:21.960226202Z" level=info msg="CreateContainer within sandbox \"7123546f7793ed1687ecea72ccf14b60c045032d1b8371dc804de38cf6bbc0d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e12db844a956be83d1ee91bab91892b86957361499a0a6e931a01191457ee4ec\""
Jul 2 00:27:21.960804 containerd[1837]: time="2024-07-02T00:27:21.960776210Z" level=info msg="StartContainer for \"e12db844a956be83d1ee91bab91892b86957361499a0a6e931a01191457ee4ec\""
Jul 2 00:27:22.039178 containerd[1837]: time="2024-07-02T00:27:22.039049934Z" level=info msg="StartContainer for \"e12db844a956be83d1ee91bab91892b86957361499a0a6e931a01191457ee4ec\" returns successfully"
Jul 2 00:27:27.356763 kubelet[3384]: I0702 00:27:27.356712 3384 status_manager.go:853] "Failed to get status for pod" podUID="e6f0c8d0b9167096a5f0e06d2e6439b7" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-106c6d4ee2" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:35954->10.200.8.26:2379: read: connection timed out"
Jul 2 00:27:29.494962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1-rootfs.mount: Deactivated successfully.
Jul 2 00:27:29.515052 containerd[1837]: time="2024-07-02T00:27:29.514989277Z" level=info msg="shim disconnected" id=b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1 namespace=k8s.io
Jul 2 00:27:29.515545 containerd[1837]: time="2024-07-02T00:27:29.515057078Z" level=warning msg="cleaning up after shim disconnected" id=b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1 namespace=k8s.io
Jul 2 00:27:29.515545 containerd[1837]: time="2024-07-02T00:27:29.515069978Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:27:29.946377 kubelet[3384]: I0702 00:27:29.946336 3384 scope.go:117] "RemoveContainer" containerID="fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100"
Jul 2 00:27:29.947036 kubelet[3384]: I0702 00:27:29.946870 3384 scope.go:117] "RemoveContainer" containerID="b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1"
Jul 2 00:27:29.947455 kubelet[3384]: E0702 00:27:29.947280 3384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-bz5nh_tigera-operator(1fab52b9-f219-4c41-934e-91f28b4d128c)\"" pod="tigera-operator/tigera-operator-76c4974c85-bz5nh" podUID="1fab52b9-f219-4c41-934e-91f28b4d128c"
Jul 2 00:27:29.948385 containerd[1837]: time="2024-07-02T00:27:29.948311243Z" level=info msg="RemoveContainer for \"fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100\""
Jul 2 00:27:29.958445 containerd[1837]: time="2024-07-02T00:27:29.958399401Z" level=info msg="RemoveContainer for \"fbcff62a80fa0c5e7eb514817b372a80c4cb6051bf6da902ece91eada39dc100\" returns successfully"
Jul 2 00:27:31.534881 kubelet[3384]: E0702 00:27:31.534618 3384 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-106c6d4ee2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:27:41.537007 kubelet[3384]: E0702 00:27:41.535818 3384 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-106c6d4ee2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:27:45.164692 kubelet[3384]: I0702 00:27:45.164133 3384 scope.go:117] "RemoveContainer" containerID="b7de91b63fcb694222e87aed9af9971a583a9c303d9b58fe7f9ce10538481fc1"
Jul 2 00:27:45.167792 containerd[1837]: time="2024-07-02T00:27:45.167722703Z" level=info msg="CreateContainer within sandbox \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}"
Jul 2 00:27:45.237124 containerd[1837]: time="2024-07-02T00:27:45.237079085Z" level=info msg="CreateContainer within sandbox \"e1882cc3ce52f4d03e652cd34b09edf3677c8abb717ae6c9fafe54340f3cf22a\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"b1a6e17b430a46679a6df3580b765ae252a561c0f917b42e501f24899c8f3974\""
Jul 2 00:27:45.237626 containerd[1837]: time="2024-07-02T00:27:45.237562092Z" level=info msg="StartContainer for \"b1a6e17b430a46679a6df3580b765ae252a561c0f917b42e501f24899c8f3974\""
Jul 2 00:27:45.296075 containerd[1837]: time="2024-07-02T00:27:45.296031904Z" level=info msg="StartContainer for \"b1a6e17b430a46679a6df3580b765ae252a561c0f917b42e501f24899c8f3974\" returns successfully"