Sep 4 17:29:17.083625 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:29:17.083652 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:29:17.083663 kernel: BIOS-provided physical RAM map:
Sep 4 17:29:17.083670 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 17:29:17.083676 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 4 17:29:17.083682 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 4 17:29:17.083692 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 4 17:29:17.083702 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Sep 4 17:29:17.083710 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 4 17:29:17.083716 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 4 17:29:17.083756 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 4 17:29:17.083762 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 4 17:29:17.083768 kernel: printk: bootconsole [earlyser0] enabled
Sep 4 17:29:17.083775 kernel: NX (Execute Disable) protection: active
Sep 4 17:29:17.083788 kernel: APIC: Static calls initialized
Sep 4 17:29:17.083795 kernel: efi: EFI v2.7 by Microsoft
Sep 4 17:29:17.083805 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Sep 4 17:29:17.083813 kernel: SMBIOS 3.1.0 present.
Sep 4 17:29:17.083820 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 4 17:29:17.083828 kernel: Hypervisor detected: Microsoft Hyper-V Sep 4 17:29:17.083839 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 4 17:29:17.083846 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Sep 4 17:29:17.083856 kernel: Hyper-V: Nested features: 0x1e0101 Sep 4 17:29:17.083863 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 4 17:29:17.083873 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 4 17:29:17.083882 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:29:17.083889 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:29:17.083900 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 4 17:29:17.083908 kernel: tsc: Detected 2593.905 MHz processor Sep 4 17:29:17.083917 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:29:17.083925 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:29:17.083932 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 4 17:29:17.083940 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:29:17.083952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:29:17.083959 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 4 17:29:17.083967 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 4 17:29:17.083976 kernel: Using GB pages for direct mapping Sep 4 17:29:17.083983 kernel: Secure boot disabled Sep 4 17:29:17.083991 kernel: ACPI: Early table checksum verification disabled Sep 4 17:29:17.084000 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 4 17:29:17.084011 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084023 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084031 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 4 17:29:17.084039 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 4 17:29:17.084049 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084056 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084065 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084077 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084084 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084094 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084102 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084113 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 4 17:29:17.084121 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 4 17:29:17.084129 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 4 17:29:17.084139 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 4 17:29:17.084148 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Sep 4 17:29:17.084157 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 4 17:29:17.084166 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 4 17:29:17.084174 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 4 17:29:17.084183 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 4 17:29:17.084192 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 4 17:29:17.084199 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:29:17.084209 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:29:17.084217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 4 17:29:17.084226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 4 17:29:17.084237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 4 17:29:17.084245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 4 17:29:17.084252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 4 17:29:17.084262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 4 17:29:17.084270 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 4 17:29:17.084277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 4 17:29:17.084288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 4 17:29:17.084296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 4 17:29:17.084306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 4 17:29:17.084316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 4 17:29:17.084323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 4 17:29:17.084333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 4 17:29:17.084341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 4 17:29:17.084349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 4 17:29:17.084358 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 4 17:29:17.084367 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 4 17:29:17.084374 kernel: Zone ranges: Sep 4 17:29:17.084387 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:29:17.084394 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:29:17.084402 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:29:17.084412 kernel: Movable zone start for each node Sep 4 17:29:17.084419 kernel: Early memory node ranges Sep 4 17:29:17.084427 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:29:17.084438 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 4 17:29:17.084445 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 4 17:29:17.084453 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:29:17.084465 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 4 17:29:17.084473 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:29:17.084483 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:29:17.084491 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Sep 4 17:29:17.084501 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 4 
17:29:17.084509 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 4 17:29:17.084521 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:29:17.084535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:29:17.084549 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:29:17.084574 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 4 17:29:17.084590 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:29:17.084606 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 4 17:29:17.084624 kernel: Booting paravirtualized kernel on Hyper-V Sep 4 17:29:17.084642 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:29:17.084659 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:29:17.084675 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:29:17.084690 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:29:17.084703 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:29:17.084730 kernel: Hyper-V: PV spinlocks enabled Sep 4 17:29:17.084748 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:29:17.084767 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:17.084784 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:29:17.084800 kernel: random: crng init done Sep 4 17:29:17.084815 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:29:17.084833 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:29:17.084847 kernel: Fallback order for Node 0: 0 Sep 4 17:29:17.084869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 4 17:29:17.084900 kernel: Policy zone: Normal Sep 4 17:29:17.084921 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:29:17.084936 kernel: software IO TLB: area num 2. Sep 4 17:29:17.084952 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 316268K reserved, 0K cma-reserved) Sep 4 17:29:17.084970 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:29:17.084986 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:29:17.085002 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:29:17.085020 kernel: Dynamic Preempt: voluntary Sep 4 17:29:17.085038 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:29:17.085057 kernel: rcu: RCU event tracing is enabled. Sep 4 17:29:17.085076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:29:17.085094 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:29:17.085109 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:29:17.085128 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:29:17.085144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:29:17.085166 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:29:17.085187 kernel: Using NULL legacy PIC Sep 4 17:29:17.085205 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 4 17:29:17.085220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:29:17.085236 kernel: Console: colour dummy device 80x25 Sep 4 17:29:17.085251 kernel: printk: console [tty1] enabled Sep 4 17:29:17.085264 kernel: printk: console [ttyS0] enabled Sep 4 17:29:17.085276 kernel: printk: bootconsole [earlyser0] disabled Sep 4 17:29:17.085290 kernel: ACPI: Core revision 20230628 Sep 4 17:29:17.085304 kernel: Failed to register legacy timer interrupt Sep 4 17:29:17.085320 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:29:17.085333 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:29:17.085347 kernel: Hyper-V: Using IPI hypercalls Sep 4 17:29:17.085361 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 4 17:29:17.085374 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 4 17:29:17.085390 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 4 17:29:17.085404 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 4 17:29:17.085418 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 4 17:29:17.085432 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 4 17:29:17.085448 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Sep 4 17:29:17.085462 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:29:17.085487 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:29:17.085501 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:29:17.085514 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:29:17.085527 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:29:17.085541 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:29:17.085554 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:29:17.085568 kernel: RETBleed: Vulnerable Sep 4 17:29:17.085584 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:29:17.085599 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:29:17.085614 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:29:17.085629 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:29:17.085644 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:29:17.085658 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:29:17.085673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:29:17.085686 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:29:17.085700 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:29:17.085714 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:29:17.087752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:29:17.087771 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 4 17:29:17.087780 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 4 17:29:17.087788 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 4 17:29:17.087799 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 4 17:29:17.087807 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:29:17.087817 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:29:17.087826 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:29:17.087834 kernel: SELinux: Initializing. Sep 4 17:29:17.087846 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.087855 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.087866 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:29:17.087876 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:17.087888 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:17.087899 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:17.087909 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:29:17.087918 kernel: signal: max sigframe size: 3632 Sep 4 17:29:17.087928 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:29:17.087937 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:29:17.087948 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:29:17.087956 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:29:17.087965 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:29:17.087977 kernel: .... node #0, CPUs: #1 Sep 4 17:29:17.087985 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 4 17:29:17.087998 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:29:17.088006 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:29:17.088015 kernel: smpboot: Max logical packages: 1 Sep 4 17:29:17.088025 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Sep 4 17:29:17.088033 kernel: devtmpfs: initialized Sep 4 17:29:17.088045 kernel: x86/mm: Memory block size: 128MB Sep 4 17:29:17.088056 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 4 17:29:17.088066 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:29:17.088075 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:29:17.088083 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:29:17.088094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:29:17.088102 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:29:17.088111 kernel: audit: type=2000 audit(1725470955.028:1): state=initialized audit_enabled=0 res=1 Sep 4 17:29:17.088121 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:29:17.088129 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:29:17.088141 kernel: cpuidle: using governor menu Sep 4 17:29:17.088150 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:29:17.088158 kernel: dca service started, version 1.12.1 Sep 4 17:29:17.088170 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 4 17:29:17.088178 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:29:17.088187 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:29:17.088197 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:29:17.088205 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:29:17.088215 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:29:17.088226 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:29:17.088234 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:29:17.088245 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:29:17.088253 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:29:17.088262 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:29:17.088273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:29:17.088280 kernel: ACPI: Interpreter enabled Sep 4 17:29:17.088291 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:29:17.088300 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:29:17.088313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:29:17.088323 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:29:17.088333 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 4 17:29:17.088344 kernel: iommu: Default domain type: Translated Sep 4 17:29:17.088352 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:29:17.088362 kernel: efivars: Registered efivars operations Sep 4 17:29:17.088373 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:29:17.088381 kernel: PCI: System does not support PCI Sep 4 17:29:17.088390 kernel: vgaarb: loaded Sep 4 17:29:17.088402 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 4 17:29:17.088410 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:29:17.088421 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:29:17.088429 kernel: pnp: PnP ACPI init Sep 4 17:29:17.088438 kernel: 
pnp: PnP ACPI: found 3 devices Sep 4 17:29:17.088448 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:29:17.088456 kernel: NET: Registered PF_INET protocol family Sep 4 17:29:17.088466 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:29:17.088476 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:29:17.088486 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:29:17.088497 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:29:17.088505 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:29:17.088515 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:29:17.088524 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.088532 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.088544 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:29:17.088552 kernel: NET: Registered PF_XDP protocol family Sep 4 17:29:17.088560 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:29:17.088572 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:29:17.088580 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Sep 4 17:29:17.088591 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:29:17.088600 kernel: Initialise system trusted keyrings Sep 4 17:29:17.088607 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 17:29:17.088618 kernel: Key type asymmetric registered Sep 4 17:29:17.088626 kernel: Asymmetric key parser 'x509' registered Sep 4 17:29:17.088636 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:29:17.088645 kernel: io scheduler mq-deadline registered Sep 4 17:29:17.088655 kernel: io scheduler kyber registered Sep 4 17:29:17.088666 kernel: io scheduler bfq registered Sep 4 17:29:17.088674 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:29:17.088683 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:29:17.088693 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:29:17.088701 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:29:17.088712 kernel: i8042: PNP: No PS/2 controller found. 
Sep 4 17:29:17.088862 kernel: rtc_cmos 00:02: registered as rtc0
Sep 4 17:29:17.088959 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:29:16 UTC (1725470956)
Sep 4 17:29:17.089045 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 4 17:29:17.089056 kernel: intel_pstate: CPU model not supported
Sep 4 17:29:17.089067 kernel: efifb: probing for efifb
Sep 4 17:29:17.089076 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 17:29:17.089084 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 17:29:17.089095 kernel: efifb: scrolling: redraw
Sep 4 17:29:17.089103 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 17:29:17.089116 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:29:17.089124 kernel: fb0: EFI VGA frame buffer device
Sep 4 17:29:17.089132 kernel: pstore: Using crash dump compression: deflate
Sep 4 17:29:17.089143 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 17:29:17.089151 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:29:17.089161 kernel: Segment Routing with IPv6
Sep 4 17:29:17.089170 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:29:17.089178 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:29:17.089189 kernel: Key type dns_resolver registered
Sep 4 17:29:17.089197 kernel: IPI shorthand broadcast: enabled
Sep 4 17:29:17.089210 kernel: sched_clock: Marking stable (840002900, 47603400)->(1104406200, -216799900)
Sep 4 17:29:17.089220 kernel: registered taskstats version 1
Sep 4 17:29:17.089229 kernel: Loading compiled-in X.509 certificates
Sep 4 17:29:17.089241 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:29:17.089250 kernel: Key type .fscrypt registered
Sep 4 17:29:17.089259 kernel: Key type fscrypt-provisioning registered
Sep 4 17:29:17.089270 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:29:17.089278 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:29:17.089290 kernel: ima: No architecture policies found
Sep 4 17:29:17.089299 kernel: clk: Disabling unused clocks
Sep 4 17:29:17.089307 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:29:17.089319 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:29:17.089327 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:29:17.089336 kernel: Run /init as init process
Sep 4 17:29:17.089345 kernel: with arguments:
Sep 4 17:29:17.089353 kernel: /init
Sep 4 17:29:17.089363 kernel: with environment:
Sep 4 17:29:17.089374 kernel: HOME=/
Sep 4 17:29:17.089381 kernel: TERM=linux
Sep 4 17:29:17.089392 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:29:17.089402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:29:17.089415 systemd[1]: Detected virtualization microsoft.
Sep 4 17:29:17.089424 systemd[1]: Detected architecture x86-64.
Sep 4 17:29:17.089433 systemd[1]: Running in initrd.
Sep 4 17:29:17.089443 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:29:17.089453 systemd[1]: Hostname set to .
Sep 4 17:29:17.089465 systemd[1]: Initializing machine ID from random generator.
Sep 4 17:29:17.089474 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:29:17.089485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:17.089496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:17.089509 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:29:17.089520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:17.089534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:29:17.089552 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:29:17.089570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:29:17.089585 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:29:17.089600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:17.089615 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:17.089631 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:17.089646 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:17.089663 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:17.089678 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:17.089693 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:17.089708 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:17.091744 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:29:17.091761 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:29:17.091770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:17.091781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:17.091795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:17.091805 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:17.091815 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:29:17.091824 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:17.091836 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:29:17.091844 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:29:17.091854 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:17.091865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:17.091895 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:29:17.091920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:17.091931 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:17.091941 systemd-journald[176]: Journal started Sep 4 17:29:17.091964 systemd-journald[176]: Runtime Journal (/run/log/journal/e98395da61d54cfd99bbadca36b32eb4) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:29:17.109479 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 4 17:29:17.110002 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:29:17.112404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:17.120769 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:29:17.133887 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:29:17.148990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:17.157761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:17.163218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:17.184806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:29:17.184956 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:17.192784 kernel: Bridge firewalling registered Sep 4 17:29:17.192842 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:29:17.195813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:17.201758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:17.213900 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:17.220982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:17.228210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:17.233159 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:29:17.243110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:17.253281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:17.259218 dracut-cmdline[207]: dracut-dracut-053 Sep 4 17:29:17.259218 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:17.280893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:17.319848 systemd-resolved[243]: Positive Trust Anchors: Sep 4 17:29:17.319867 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:17.319917 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:17.345194 systemd-resolved[243]: Defaulting to hostname 'linux'. 
Sep 4 17:29:17.348547 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:17.354544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:17.378742 kernel: SCSI subsystem initialized Sep 4 17:29:17.390737 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:29:17.404742 kernel: iscsi: registered transport (tcp) Sep 4 17:29:17.429600 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:29:17.429663 kernel: QLogic iSCSI HBA Driver Sep 4 17:29:17.464908 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:17.476851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:29:17.511121 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:29:17.511180 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:29:17.514512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:29:17.557753 kernel: raid6: avx512x4 gen() 18229 MB/s Sep 4 17:29:17.576731 kernel: raid6: avx512x2 gen() 18203 MB/s Sep 4 17:29:17.595732 kernel: raid6: avx512x1 gen() 18052 MB/s Sep 4 17:29:17.614735 kernel: raid6: avx2x4 gen() 18174 MB/s Sep 4 17:29:17.633733 kernel: raid6: avx2x2 gen() 18102 MB/s Sep 4 17:29:17.653756 kernel: raid6: avx2x1 gen() 13882 MB/s Sep 4 17:29:17.653801 kernel: raid6: using algorithm avx512x4 gen() 18229 MB/s Sep 4 17:29:17.675520 kernel: raid6: .... xor() 7755 MB/s, rmw enabled Sep 4 17:29:17.675549 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:29:17.701741 kernel: xor: automatically using best checksumming function avx Sep 4 17:29:17.868748 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:29:17.877875 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:17.887876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:17.899631 systemd-udevd[394]: Using default interface naming scheme 'v255'. Sep 4 17:29:17.903980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:17.917650 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:29:17.931425 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Sep 4 17:29:17.958715 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:17.966367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:18.006621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:18.019256 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:29:18.049592 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:18.057404 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:18.064902 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:18.070673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:18.080911 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:29:18.096870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:18.113782 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:29:18.101218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 17:29:18.104963 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:18.107951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:18.108122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.111414 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:18.136037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:18.142675 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:18.155953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:18.156056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.174744 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:29:18.175891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:18.519185 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:29:18.519221 kernel: AES CTR mode by8 optimization enabled Sep 4 17:29:18.519768 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:29:18.519797 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 17:29:18.519816 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:29:18.519834 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:29:18.519852 kernel: PTP clock support registered Sep 4 17:29:18.519870 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:29:18.519886 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:29:18.519909 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:29:18.519925 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:29:18.519943 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:29:18.519972 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:29:18.487673 systemd-resolved[243]: Clock change detected. Flushing caches. Sep 4 17:29:18.541526 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:29:18.541805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.553246 kernel: scsi host0: storvsc_host_t Sep 4 17:29:18.553300 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:29:18.555952 kernel: scsi host1: storvsc_host_t Sep 4 17:29:18.560627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:18.567176 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:29:18.567409 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:29:18.578289 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:29:18.585753 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 17:29:18.585789 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:29:18.595465 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 17:29:18.595673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:29:18.601256 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:29:18.601426 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: VF slot 1 added Sep 4 17:29:18.622564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 17:29:18.636270 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:29:18.645889 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:29:18.646100 kernel: hv_pci 48d58ca8-ea40-4e3b-8d58-cdaed02d056b: PCI VMBus probing: Using version 0x10004 Sep 4 17:29:18.646245 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:29:18.646369 kernel: hv_pci 48d58ca8-ea40-4e3b-8d58-cdaed02d056b: PCI host bridge to bus ea40:00 Sep 4 17:29:18.654887 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 17:29:18.655112 kernel: pci_bus ea40:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:29:18.655340 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:29:18.659278 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:29:18.659437 kernel: pci_bus ea40:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:29:18.662983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:18.663016 kernel: pci ea40:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:29:18.668012 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 17:29:18.668186 kernel: pci ea40:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:29:18.679246 kernel: pci ea40:00:02.0: enabling Extended Tags Sep 4 17:29:18.693259 kernel: pci ea40:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ea40:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:29:18.694103 kernel: pci_bus ea40:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:29:18.699577 kernel: pci ea40:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:29:18.887035 kernel: mlx5_core ea40:00:02.0: enabling device (0000 -> 0002) Sep 4 17:29:18.892257 kernel: mlx5_core ea40:00:02.0: firmware version: 14.30.1284 Sep 4 17:29:19.108251 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: VF registering: eth1 Sep 4 17:29:19.108478 kernel: mlx5_core ea40:00:02.0 eth1: joined to eth0 Sep 4 17:29:19.115248 kernel: mlx5_core ea40:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:29:19.123250 kernel: mlx5_core ea40:00:02.0 enP59968s1: renamed from eth1 Sep 4 17:29:19.186450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:29:19.263254 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437) Sep 4 17:29:19.278311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:29:19.325768 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:29:19.367253 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (448) Sep 4 17:29:19.381174 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:29:19.388204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:29:19.401377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:29:19.412288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:19.420250 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:20.426291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:20.427376 disk-uuid[601]: The operation has completed successfully. Sep 4 17:29:20.492604 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 4 17:29:20.492713 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:29:20.527404 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:29:20.533748 sh[687]: Success Sep 4 17:29:20.585256 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:29:20.798446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:29:20.809022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:29:20.811726 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:29:20.845251 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 17:29:20.845291 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:20.851221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:29:20.854324 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:29:20.856895 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:29:21.202049 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:29:21.208099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:29:21.218384 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:29:21.224844 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:29:21.241037 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:21.241088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:21.243018 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:21.280538 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:21.290999 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:29:21.298286 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:21.310323 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:29:21.323476 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:29:21.337847 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:21.349371 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:21.370999 systemd-networkd[871]: lo: Link UP Sep 4 17:29:21.371009 systemd-networkd[871]: lo: Gained carrier Sep 4 17:29:21.373057 systemd-networkd[871]: Enumeration completed Sep 4 17:29:21.373327 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:21.374598 systemd[1]: Reached target network.target - Network. Sep 4 17:29:21.389749 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:21.389758 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 17:29:21.452255 kernel: mlx5_core ea40:00:02.0 enP59968s1: Link up
Sep 4 17:29:21.484959 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: Data path switched to VF: enP59968s1
Sep 4 17:29:21.484555 systemd-networkd[871]: enP59968s1: Link UP
Sep 4 17:29:21.484678 systemd-networkd[871]: eth0: Link UP
Sep 4 17:29:21.484876 systemd-networkd[871]: eth0: Gained carrier
Sep 4 17:29:21.484889 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:29:21.489439 systemd-networkd[871]: enP59968s1: Gained carrier
Sep 4 17:29:21.541322 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 4 17:29:22.691194 ignition[852]: Ignition 2.18.0
Sep 4 17:29:22.691206 ignition[852]: Stage: fetch-offline
Sep 4 17:29:22.691263 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:22.691274 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:22.691448 ignition[852]: parsed url from cmdline: ""
Sep 4 17:29:22.691454 ignition[852]: no config URL provided
Sep 4 17:29:22.691464 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:29:22.691475 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:29:22.691482 ignition[852]: failed to fetch config: resource requires networking
Sep 4 17:29:22.693339 ignition[852]: Ignition finished successfully
Sep 4 17:29:22.713036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:29:22.722476 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:29:22.737394 ignition[880]: Ignition 2.18.0
Sep 4 17:29:22.737404 ignition[880]: Stage: fetch
Sep 4 17:29:22.737614 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:22.737627 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:22.739256 ignition[880]: parsed url from cmdline: ""
Sep 4 17:29:22.739261 ignition[880]: no config URL provided
Sep 4 17:29:22.739269 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:29:22.739281 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:29:22.740684 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 17:29:22.814382 ignition[880]: GET result: OK
Sep 4 17:29:22.814495 ignition[880]: config has been read from IMDS userdata
Sep 4 17:29:22.814536 ignition[880]: parsing config with SHA512: 4a829970494650b431bb8057696777fd158909c2fe9fd3b6527b843296b3e89942c7a0554c03facd44930d4812da0d22c7498c39c332b4b85bdab13e3fb29774
Sep 4 17:29:22.819879 unknown[880]: fetched base config from "system"
Sep 4 17:29:22.820030 unknown[880]: fetched base config from "system"
Sep 4 17:29:22.820652 ignition[880]: fetch: fetch complete
Sep 4 17:29:22.820040 unknown[880]: fetched user config from "azure"
Sep 4 17:29:22.820658 ignition[880]: fetch: fetch passed
Sep 4 17:29:22.822445 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:29:22.820706 ignition[880]: Ignition finished successfully
Sep 4 17:29:22.831374 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:29:22.851052 ignition[887]: Ignition 2.18.0 Sep 4 17:29:22.851063 ignition[887]: Stage: kargs Sep 4 17:29:22.851288 ignition[887]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:22.851302 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:22.857942 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:29:22.855219 ignition[887]: kargs: kargs passed Sep 4 17:29:22.855277 ignition[887]: Ignition finished successfully Sep 4 17:29:22.870733 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:29:22.884768 ignition[894]: Ignition 2.18.0 Sep 4 17:29:22.884778 ignition[894]: Stage: disks Sep 4 17:29:22.886607 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:29:22.884974 ignition[894]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:22.889793 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:22.884989 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:22.893929 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:29:22.885873 ignition[894]: disks: disks passed Sep 4 17:29:22.900257 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:22.885915 ignition[894]: Ignition finished successfully Sep 4 17:29:22.908125 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:22.923283 systemd-networkd[871]: eth0: Gained IPv6LL Sep 4 17:29:22.924592 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:22.934514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:29:22.996294 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:29:23.000381 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:29:23.015327 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:29:23.040406 systemd-networkd[871]: enP59968s1: Gained IPv6LL Sep 4 17:29:23.117272 kernel: EXT4-fs (sda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 17:29:23.117798 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:29:23.120587 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:23.164365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:23.170007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:29:23.179258 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (914) Sep 4 17:29:23.186408 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:29:23.207327 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:23.207359 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:23.207386 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:23.207404 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:23.200699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:29:23.200733 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:23.217347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:29:23.220067 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:29:23.238670 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:29:24.094628 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:29:24.119304 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:29:24.125530 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:29:24.130512 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:29:24.250325 coreos-metadata[916]: Sep 04 17:29:24.250 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:29:24.256878 coreos-metadata[916]: Sep 04 17:29:24.256 INFO Fetch successful Sep 4 17:29:24.259645 coreos-metadata[916]: Sep 04 17:29:24.257 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:29:24.277392 coreos-metadata[916]: Sep 04 17:29:24.277 INFO Fetch successful Sep 4 17:29:24.290456 coreos-metadata[916]: Sep 04 17:29:24.290 INFO wrote hostname ci-3975.2.1-a-eeaffe6a3f to /sysroot/etc/hostname Sep 4 17:29:24.296805 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:29:24.949470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:24.960341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:29:24.967416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:29:24.977027 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:24.976448 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:29:25.005271 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:29:25.007913 ignition[1037]: INFO : Ignition 2.18.0 Sep 4 17:29:25.007913 ignition[1037]: INFO : Stage: mount Sep 4 17:29:25.011926 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:25.011926 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:25.013451 ignition[1037]: INFO : mount: mount passed Sep 4 17:29:25.013451 ignition[1037]: INFO : Ignition finished successfully Sep 4 17:29:25.024208 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:29:25.033336 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:29:25.048417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:25.058247 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1049) Sep 4 17:29:25.058278 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:25.062243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:25.066809 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:25.072253 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:25.073514 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:29:25.097091 ignition[1066]: INFO : Ignition 2.18.0 Sep 4 17:29:25.097091 ignition[1066]: INFO : Stage: files Sep 4 17:29:25.101323 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:25.101323 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:25.107943 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:29:25.111606 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:29:25.111606 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:29:25.221735 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:29:25.226695 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:29:25.226695 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:29:25.222217 unknown[1066]: wrote ssh authorized keys file for user: core Sep 4 17:29:25.244894 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:29:25.250034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:29:25.254805 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:25.259857 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:29:25.447599 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:29:25.546600 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:25.546600 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:25.558052 ignition[1066]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:29:25.950993 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:29:26.262674 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:26.262674 ignition[1066]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 4 17:29:26.293991 ignition[1066]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:29:26.301554 ignition[1066]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:29:26.301554 ignition[1066]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 4 17:29:26.301554 ignition[1066]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: files passed Sep 4 17:29:26.334117 ignition[1066]: INFO : Ignition finished successfully Sep 4 17:29:26.326844 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:29:26.348362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:29:26.363423 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:29:26.367000 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:29:26.368516 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
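Two of the files-stage operations above, op(b) and op(a), download the kubernetes sysext image and point /etc/extensions/kubernetes.raw at it inside the new root. A rough Python sketch of those two operations, purely illustrative (Ignition performs them itself), using the URL and paths recorded in the log:

#!/usr/bin/env python3
"""Sketch of the sysext download-and-link steps from the files stage above."""
import os
import shutil
import urllib.request

SYSROOT = "/sysroot"
RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.28.7-x86-64.raw")
RAW_PATH = SYSROOT + "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
LINK_PATH = SYSROOT + "/etc/extensions/kubernetes.raw"

def write_file(url: str, dest: str) -> None:
    # op(b): GET the image and write it under /sysroot/opt/extensions.
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)

def write_link(target: str, link: str) -> None:
    # op(a): the link target recorded in the log is the path inside the real
    # root, i.e. without the /sysroot prefix.
    os.makedirs(os.path.dirname(link), exist_ok=True)
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(target, link)

if __name__ == "__main__":
    write_file(RAW_URL, RAW_PATH)
    write_link("/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw", LINK_PATH)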
Sep 4 17:29:26.381988 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:26.381988 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:26.393550 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:26.384615 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:26.401182 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:29:26.409409 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:29:26.442247 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:29:26.442347 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:29:26.449568 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:29:26.458450 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:29:26.461273 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:29:26.473629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:29:26.486859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:26.498418 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:29:26.510910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:26.517241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:26.523469 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:29:26.528219 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:29:26.531051 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:26.537597 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:29:26.543113 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:29:26.551560 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:29:26.557283 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:26.560493 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:26.569390 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:29:26.572268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:26.578138 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:29:26.586816 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:29:26.592713 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:29:26.597130 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:29:26.598257 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:26.605743 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:26.608881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:26.614950 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:29:26.617801 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
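The two grep errors above come from the root-filesystem-completion step looking for an enabled-sysext.conf list in either /sysroot/etc/flatcar or /sysroot/usr/share/flatcar; neither file exists on this image. A sketch of that lookup, illustrative only (the unit itself shells out to grep, as the messages show):

#!/usr/bin/env python3
"""Sketch: return non-comment entries from enabled-sysext.conf, checking the
two locations the completion unit greps."""
from pathlib import Path

CANDIDATES = (
    "/sysroot/etc/flatcar/enabled-sysext.conf",
    "/sysroot/usr/share/flatcar/enabled-sysext.conf",
)

def enabled_sysexts() -> list[str]:
    names: list[str] = []
    for candidate in CANDIDATES:
        path = Path(candidate)
        if not path.is_file():
            continue  # corresponds to the "No such file or directory" lines
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                names.append(line)
    return names

if __name__ == "__main__":
    print(enabled_sysexts())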
Sep 4 17:29:26.621393 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:29:26.628006 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:26.636129 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:29:26.636318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:26.642638 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:29:26.642736 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:29:26.653651 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:29:26.653819 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:29:26.668028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:29:26.673385 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:29:26.675605 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:29:26.675814 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:26.681917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:29:26.684332 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:26.699304 ignition[1119]: INFO : Ignition 2.18.0 Sep 4 17:29:26.699304 ignition[1119]: INFO : Stage: umount Sep 4 17:29:26.699304 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:26.699304 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:26.699304 ignition[1119]: INFO : umount: umount passed Sep 4 17:29:26.699304 ignition[1119]: INFO : Ignition finished successfully Sep 4 17:29:26.699562 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:29:26.701491 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:29:26.722456 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:29:26.722572 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:29:26.728869 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:29:26.728970 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:29:26.734386 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:29:26.741014 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:29:26.744540 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:29:26.746927 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:29:26.754512 systemd[1]: Stopped target network.target - Network. Sep 4 17:29:26.756973 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:29:26.757022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:26.762662 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:29:26.765161 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:29:26.765314 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:26.771278 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:29:26.773725 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:29:26.791921 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:29:26.791975 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 4 17:29:26.799431 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:29:26.799484 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:26.804574 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:29:26.804628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:29:26.815243 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:29:26.815316 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:26.823483 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:29:26.829240 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:29:26.830279 systemd-networkd[871]: eth0: DHCPv6 lease lost Sep 4 17:29:26.837942 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:29:26.838525 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:29:26.838632 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:29:26.844287 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:29:26.844421 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:29:26.849963 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:29:26.850019 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:26.872532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:29:26.878546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:29:26.878611 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:26.888545 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:29:26.888601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:26.893743 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:29:26.893790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:26.896769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:29:26.896816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:26.913890 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:26.939788 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:29:26.939935 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:26.951331 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:29:26.951392 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:26.956953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:29:26.978121 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: Data path switched from VF: enP59968s1 Sep 4 17:29:26.956995 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:26.957588 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:29:26.957629 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:26.958559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:29:26.958596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:26.959867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 4 17:29:26.959903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:26.975498 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:29:26.984089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:29:26.984153 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:26.987556 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:29:26.987601 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:27.021566 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:29:27.021642 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:27.027452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:27.027538 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:27.037028 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:29:27.037121 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:29:27.043890 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:29:27.044209 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:29:27.214124 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:29:27.214344 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:29:27.220307 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:29:27.225513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:29:27.225571 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:27.237389 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:29:27.294896 systemd[1]: Switching root. 
Sep 4 17:29:27.327767 systemd-journald[176]: Journal stopped Sep 4 17:29:17.083625 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 17:29:17.083652 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:17.083663 kernel: BIOS-provided physical RAM map: Sep 4 17:29:17.083670 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:29:17.083676 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 4 17:29:17.083682 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Sep 4 17:29:17.083692 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Sep 4 17:29:17.083702 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Sep 4 17:29:17.083710 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 4 17:29:17.083716 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 4 17:29:17.083756 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 4 17:29:17.083762 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 4 17:29:17.083768 kernel: printk: bootconsole [earlyser0] enabled Sep 4 17:29:17.083775 kernel: NX (Execute Disable) protection: active Sep 4 17:29:17.083788 kernel: APIC: Static calls initialized Sep 4 17:29:17.083795 kernel: efi: EFI v2.7 by Microsoft Sep 4 17:29:17.083805 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 Sep 4 17:29:17.083813 kernel: SMBIOS 3.1.0 present. 
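For reference, the "usable" ranges in the BIOS-e820 map printed above add up to roughly 8 GiB, within a few pages of the MemTotal the kernel reports later in the same boot (8387460K). A short check using the four usable ranges from the log, noting that the end addresses are inclusive:

#!/usr/bin/env python3
"""Sum the usable BIOS-e820 ranges from the log above."""

USABLE = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

total_bytes = sum(end - start + 1 for start, end in USABLE)
print(f"usable RAM: {total_bytes} bytes "
      f"({total_bytes // 1024} KiB, {total_bytes / 2**30:.2f} GiB)")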
Sep 4 17:29:17.083820 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 4 17:29:17.083828 kernel: Hypervisor detected: Microsoft Hyper-V Sep 4 17:29:17.083839 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 4 17:29:17.083846 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Sep 4 17:29:17.083856 kernel: Hyper-V: Nested features: 0x1e0101 Sep 4 17:29:17.083863 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 4 17:29:17.083873 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 4 17:29:17.083882 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:29:17.083889 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:29:17.083900 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 4 17:29:17.083908 kernel: tsc: Detected 2593.905 MHz processor Sep 4 17:29:17.083917 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:29:17.083925 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:29:17.083932 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 4 17:29:17.083940 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:29:17.083952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:29:17.083959 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 4 17:29:17.083967 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 4 17:29:17.083976 kernel: Using GB pages for direct mapping Sep 4 17:29:17.083983 kernel: Secure boot disabled Sep 4 17:29:17.083991 kernel: ACPI: Early table checksum verification disabled Sep 4 17:29:17.084000 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 4 17:29:17.084011 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084023 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084031 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 4 17:29:17.084039 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 4 17:29:17.084049 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084056 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084065 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084077 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084084 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084094 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084102 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:29:17.084113 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 4 17:29:17.084121 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 4 17:29:17.084129 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 4 17:29:17.084139 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 4 17:29:17.084148 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Sep 4 17:29:17.084157 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 4 17:29:17.084166 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 4 17:29:17.084174 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 4 17:29:17.084183 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 4 17:29:17.084192 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 4 17:29:17.084199 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:29:17.084209 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:29:17.084217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 4 17:29:17.084226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 4 17:29:17.084237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 4 17:29:17.084245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 4 17:29:17.084252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 4 17:29:17.084262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 4 17:29:17.084270 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 4 17:29:17.084277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 4 17:29:17.084288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 4 17:29:17.084296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 4 17:29:17.084306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 4 17:29:17.084316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 4 17:29:17.084323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 4 17:29:17.084333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 4 17:29:17.084341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 4 17:29:17.084349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 4 17:29:17.084358 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 4 17:29:17.084367 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 4 17:29:17.084374 kernel: Zone ranges: Sep 4 17:29:17.084387 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:29:17.084394 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:29:17.084402 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:29:17.084412 kernel: Movable zone start for each node Sep 4 17:29:17.084419 kernel: Early memory node ranges Sep 4 17:29:17.084427 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:29:17.084438 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 4 17:29:17.084445 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 4 17:29:17.084453 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:29:17.084465 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 4 17:29:17.084473 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:29:17.084483 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:29:17.084491 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Sep 4 17:29:17.084501 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 4 
17:29:17.084509 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 4 17:29:17.084521 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:29:17.084535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:29:17.084549 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:29:17.084574 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 4 17:29:17.084590 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:29:17.084606 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 4 17:29:17.084624 kernel: Booting paravirtualized kernel on Hyper-V Sep 4 17:29:17.084642 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:29:17.084659 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:29:17.084675 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:29:17.084690 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:29:17.084703 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:29:17.084730 kernel: Hyper-V: PV spinlocks enabled Sep 4 17:29:17.084748 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:29:17.084767 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:17.084784 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:29:17.084800 kernel: random: crng init done Sep 4 17:29:17.084815 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:29:17.084833 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:29:17.084847 kernel: Fallback order for Node 0: 0 Sep 4 17:29:17.084869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 4 17:29:17.084900 kernel: Policy zone: Normal Sep 4 17:29:17.084921 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:29:17.084936 kernel: software IO TLB: area num 2. Sep 4 17:29:17.084952 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 316268K reserved, 0K cma-reserved) Sep 4 17:29:17.084970 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:29:17.084986 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:29:17.085002 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:29:17.085020 kernel: Dynamic Preempt: voluntary Sep 4 17:29:17.085038 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:29:17.085057 kernel: rcu: RCU event tracing is enabled. Sep 4 17:29:17.085076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:29:17.085094 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:29:17.085109 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:29:17.085128 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:29:17.085144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
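The delay-loop line above prints 5187.81 BogoMIPS with lpj=2593905. That follows from BogoMIPS = lpj * HZ / 500000; a 1000 Hz tick is an assumption here, but it reproduces the printed value exactly, and doubling it matches the two-CPU total reported at SMP bring-up later in the log:

#!/usr/bin/env python3
"""Reproduce the BogoMIPS figures printed in the log above."""

LPJ = 2593905   # loops per jiffy, from the calibration line
HZ = 1000       # assumed tick rate; consistent with the printed numbers

per_cpu = LPJ * HZ / 500_000
print(f"per-CPU BogoMIPS: {per_cpu:.2f}")      # 5187.81, as in the log
print(f"2 CPUs total:     {2 * per_cpu:.2f}")  # 10375.62, reported at SMP bring-up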
Sep 4 17:29:17.085166 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:29:17.085187 kernel: Using NULL legacy PIC Sep 4 17:29:17.085205 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 4 17:29:17.085220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:29:17.085236 kernel: Console: colour dummy device 80x25 Sep 4 17:29:17.085251 kernel: printk: console [tty1] enabled Sep 4 17:29:17.085264 kernel: printk: console [ttyS0] enabled Sep 4 17:29:17.085276 kernel: printk: bootconsole [earlyser0] disabled Sep 4 17:29:17.085290 kernel: ACPI: Core revision 20230628 Sep 4 17:29:17.085304 kernel: Failed to register legacy timer interrupt Sep 4 17:29:17.085320 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:29:17.085333 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:29:17.085347 kernel: Hyper-V: Using IPI hypercalls Sep 4 17:29:17.085361 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 4 17:29:17.085374 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 4 17:29:17.085390 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 4 17:29:17.085404 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 4 17:29:17.085418 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 4 17:29:17.085432 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 4 17:29:17.085448 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Sep 4 17:29:17.085462 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:29:17.085487 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:29:17.085501 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:29:17.085514 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:29:17.085527 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:29:17.085541 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:29:17.085554 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:29:17.085568 kernel: RETBleed: Vulnerable Sep 4 17:29:17.085584 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:29:17.085599 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:29:17.085614 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:29:17.085629 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:29:17.085644 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:29:17.085658 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:29:17.085673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:29:17.085686 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:29:17.085700 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:29:17.085714 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:29:17.087752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:29:17.087771 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 4 17:29:17.087780 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 4 17:29:17.087788 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 4 17:29:17.087799 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 4 17:29:17.087807 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:29:17.087817 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:29:17.087826 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:29:17.087834 kernel: SELinux: Initializing. Sep 4 17:29:17.087846 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.087855 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.087866 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:29:17.087876 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:17.087888 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:17.087899 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:17.087909 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:29:17.087918 kernel: signal: max sigframe size: 3632 Sep 4 17:29:17.087928 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:29:17.087937 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:29:17.087948 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:29:17.087956 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:29:17.087965 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:29:17.087977 kernel: .... node #0, CPUs: #1 Sep 4 17:29:17.087985 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 4 17:29:17.087998 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
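The mitigation and "Vulnerable" lines above (Spectre V1/V2, RETBleed, TAA, MMIO Stale Data, GDS) can be re-read at runtime from sysfs: each file under /sys/devices/system/cpu/vulnerabilities/ holds one status string such as "Mitigation: Retpolines" or "Vulnerable". A small sketch that dumps them:

#!/usr/bin/env python3
"""Print the CPU vulnerability status strings exposed by the kernel."""
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def vulnerability_report() -> dict[str, str]:
    if not VULN_DIR.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in vulnerability_report().items():
        print(f"{name:24} {status}")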
Sep 4 17:29:17.088006 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:29:17.088015 kernel: smpboot: Max logical packages: 1 Sep 4 17:29:17.088025 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Sep 4 17:29:17.088033 kernel: devtmpfs: initialized Sep 4 17:29:17.088045 kernel: x86/mm: Memory block size: 128MB Sep 4 17:29:17.088056 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 4 17:29:17.088066 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:29:17.088075 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:29:17.088083 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:29:17.088094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:29:17.088102 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:29:17.088111 kernel: audit: type=2000 audit(1725470955.028:1): state=initialized audit_enabled=0 res=1 Sep 4 17:29:17.088121 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:29:17.088129 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:29:17.088141 kernel: cpuidle: using governor menu Sep 4 17:29:17.088150 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:29:17.088158 kernel: dca service started, version 1.12.1 Sep 4 17:29:17.088170 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 4 17:29:17.088178 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:29:17.088187 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:29:17.088197 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:29:17.088205 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:29:17.088215 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:29:17.088226 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:29:17.088234 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:29:17.088245 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:29:17.088253 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:29:17.088262 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:29:17.088273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:29:17.088280 kernel: ACPI: Interpreter enabled Sep 4 17:29:17.088291 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:29:17.088300 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:29:17.088313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:29:17.088323 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:29:17.088333 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 4 17:29:17.088344 kernel: iommu: Default domain type: Translated Sep 4 17:29:17.088352 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:29:17.088362 kernel: efivars: Registered efivars operations Sep 4 17:29:17.088373 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:29:17.088381 kernel: PCI: System does not support PCI Sep 4 17:29:17.088390 kernel: vgaarb: loaded Sep 4 17:29:17.088402 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 4 17:29:17.088410 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:29:17.088421 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:29:17.088429 kernel: pnp: PnP ACPI init Sep 4 17:29:17.088438 kernel: 
pnp: PnP ACPI: found 3 devices Sep 4 17:29:17.088448 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:29:17.088456 kernel: NET: Registered PF_INET protocol family Sep 4 17:29:17.088466 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:29:17.088476 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:29:17.088486 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:29:17.088497 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:29:17.088505 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:29:17.088515 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:29:17.088524 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.088532 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:29:17.088544 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:29:17.088552 kernel: NET: Registered PF_XDP protocol family Sep 4 17:29:17.088560 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:29:17.088572 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:29:17.088580 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Sep 4 17:29:17.088591 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:29:17.088600 kernel: Initialise system trusted keyrings Sep 4 17:29:17.088607 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 17:29:17.088618 kernel: Key type asymmetric registered Sep 4 17:29:17.088626 kernel: Asymmetric key parser 'x509' registered Sep 4 17:29:17.088636 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:29:17.088645 kernel: io scheduler mq-deadline registered Sep 4 17:29:17.088655 kernel: io scheduler kyber registered Sep 4 17:29:17.088666 kernel: io scheduler bfq registered Sep 4 17:29:17.088674 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:29:17.088683 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:29:17.088693 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:29:17.088701 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:29:17.088712 kernel: i8042: PNP: No PS/2 controller found. 
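The "(order: N, M bytes)" annotations in the hash-table lines above are just the allocation size expressed in 4 KiB pages, as a power of two. A quick check against three of the tables printed in the log:

#!/usr/bin/env python3
"""Recover the page order from the hash-table sizes in the log above."""
import math

PAGE_SIZE = 4096

def order_for(table_bytes: int) -> int:
    return int(math.log2(table_bytes // PAGE_SIZE))

print(order_for(524288))    # TCP established hash table -> order 7
print(order_for(2097152))   # TCP bind hash table        -> order 9
print(order_for(131072))    # UDP hash table             -> order 5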
Sep 4 17:29:17.088862 kernel: rtc_cmos 00:02: registered as rtc0 Sep 4 17:29:17.088959 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:29:16 UTC (1725470956) Sep 4 17:29:17.089045 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 4 17:29:17.089056 kernel: intel_pstate: CPU model not supported Sep 4 17:29:17.089067 kernel: efifb: probing for efifb Sep 4 17:29:17.089076 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:29:17.089084 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:29:17.089095 kernel: efifb: scrolling: redraw Sep 4 17:29:17.089103 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:29:17.089116 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:29:17.089124 kernel: fb0: EFI VGA frame buffer device Sep 4 17:29:17.089132 kernel: pstore: Using crash dump compression: deflate Sep 4 17:29:17.089143 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:29:17.089151 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:29:17.089161 kernel: Segment Routing with IPv6 Sep 4 17:29:17.089170 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:29:17.089178 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:29:17.089189 kernel: Key type dns_resolver registered Sep 4 17:29:17.089197 kernel: IPI shorthand broadcast: enabled Sep 4 17:29:17.089210 kernel: sched_clock: Marking stable (840002900, 47603400)->(1104406200, -216799900) Sep 4 17:29:17.089220 kernel: registered taskstats version 1 Sep 4 17:29:17.089229 kernel: Loading compiled-in X.509 certificates Sep 4 17:29:17.089241 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:29:17.089250 kernel: Key type .fscrypt registered Sep 4 17:29:17.089259 kernel: Key type fscrypt-provisioning registered Sep 4 17:29:17.089270 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:29:17.089278 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:29:17.089290 kernel: ima: No architecture policies found Sep 4 17:29:17.089299 kernel: clk: Disabling unused clocks Sep 4 17:29:17.089307 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:29:17.089319 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:29:17.089327 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:29:17.089336 kernel: Run /init as init process Sep 4 17:29:17.089345 kernel: with arguments: Sep 4 17:29:17.089353 kernel: /init Sep 4 17:29:17.089363 kernel: with environment: Sep 4 17:29:17.089374 kernel: HOME=/ Sep 4 17:29:17.089381 kernel: TERM=linux Sep 4 17:29:17.089392 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:29:17.089402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:17.089415 systemd[1]: Detected virtualization microsoft. Sep 4 17:29:17.089424 systemd[1]: Detected architecture x86-64. Sep 4 17:29:17.089433 systemd[1]: Running in initrd. Sep 4 17:29:17.089443 systemd[1]: No hostname configured, using default hostname. Sep 4 17:29:17.089453 systemd[1]: Hostname set to . Sep 4 17:29:17.089465 systemd[1]: Initializing machine ID from random generator. 
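The rtc_cmos line above gives the same instant twice: as a UTC timestamp and as the epoch value in parentheses. A one-liner confirms they agree:

#!/usr/bin/env python3
"""Convert the epoch value from the rtc_cmos message back to UTC."""
from datetime import datetime, timezone

EPOCH = 1725470956  # from the rtc_cmos line in the log
print(datetime.fromtimestamp(EPOCH, tz=timezone.utc).isoformat())
# -> 2024-09-04T17:29:16+00:00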
Sep 4 17:29:17.089474 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:29:17.089485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:17.089496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:17.089509 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:29:17.089520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:17.089534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:29:17.089552 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:29:17.089570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:29:17.089585 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:29:17.089600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:17.089615 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:17.089631 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:17.089646 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:17.089663 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:17.089678 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:17.089693 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:17.089708 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:17.091744 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:29:17.091761 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:29:17.091770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:17.091781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:17.091795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:17.091805 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:17.091815 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:29:17.091824 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:17.091836 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:29:17.091844 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:29:17.091854 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:17.091865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:17.091895 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:29:17.091920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:17.091931 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:17.091941 systemd-journald[176]: Journal started Sep 4 17:29:17.091964 systemd-journald[176]: Runtime Journal (/run/log/journal/e98395da61d54cfd99bbadca36b32eb4) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:29:17.109479 systemd[1]: Started systemd-journald.service - Journal Service. 
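The device units above contain "\x2d" because systemd escapes block-device paths into unit names: "/" becomes "-", and bytes outside a small safe set (including literal "-") become \xNN. A simplified re-implementation, enough to reproduce the names in the log but skipping some corner cases of systemd-escape:

#!/usr/bin/env python3
"""Simplified sketch of systemd's path-to-unit-name escaping."""

SAFE = set("abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_path(path: str, suffix: str = ".device") -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in SAFE:
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + suffix

print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(escape_path("/dev/mapper/usr"))
# -> dev-mapper-usr.device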
Sep 4 17:29:17.110002 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:29:17.112404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:17.120769 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:29:17.133887 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:29:17.148990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:17.157761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:17.163218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:17.184806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:29:17.184956 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:17.192784 kernel: Bridge firewalling registered Sep 4 17:29:17.192842 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:29:17.195813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:17.201758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:17.213900 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:17.220982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:17.228210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:17.233159 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:29:17.243110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:17.253281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:17.259218 dracut-cmdline[207]: dracut-dracut-053 Sep 4 17:29:17.259218 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:17.280893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:17.319848 systemd-resolved[243]: Positive Trust Anchors: Sep 4 17:29:17.319867 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:17.319917 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:17.345194 systemd-resolved[243]: Defaulting to hostname 'linux'. 
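The dracut-cmdline hook above echoes the kernel command line it will act on. A small sketch that splits an excerpt of that command line into bare flags and key=value options; this particular command line has no quoted values, so whitespace splitting is sufficient (repeated keys such as console= are kept as a list):

#!/usr/bin/env python3
"""Parse an excerpt of the kernel command line shown in the log above."""

CMDLINE = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro consoleblank=0 "
           "root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 "
           "flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin")

def parse_cmdline(cmdline: str) -> tuple[list[str], list[tuple[str, str]]]:
    flags: list[str] = []
    options: list[tuple[str, str]] = []
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            options.append((key, value))
        else:
            flags.append(token)
    return flags, options

flags, options = parse_cmdline(CMDLINE)
print(flags)                  # ['flatcar.autologin']
print(dict(options)["root"])  # 'LABEL=ROOT'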
Sep 4 17:29:17.348547 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:17.354544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:17.378742 kernel: SCSI subsystem initialized Sep 4 17:29:17.390737 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:29:17.404742 kernel: iscsi: registered transport (tcp) Sep 4 17:29:17.429600 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:29:17.429663 kernel: QLogic iSCSI HBA Driver Sep 4 17:29:17.464908 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:17.476851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:29:17.511121 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:29:17.511180 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:29:17.514512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:29:17.557753 kernel: raid6: avx512x4 gen() 18229 MB/s Sep 4 17:29:17.576731 kernel: raid6: avx512x2 gen() 18203 MB/s Sep 4 17:29:17.595732 kernel: raid6: avx512x1 gen() 18052 MB/s Sep 4 17:29:17.614735 kernel: raid6: avx2x4 gen() 18174 MB/s Sep 4 17:29:17.633733 kernel: raid6: avx2x2 gen() 18102 MB/s Sep 4 17:29:17.653756 kernel: raid6: avx2x1 gen() 13882 MB/s Sep 4 17:29:17.653801 kernel: raid6: using algorithm avx512x4 gen() 18229 MB/s Sep 4 17:29:17.675520 kernel: raid6: .... xor() 7755 MB/s, rmw enabled Sep 4 17:29:17.675549 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:29:17.701741 kernel: xor: automatically using best checksumming function avx Sep 4 17:29:17.868748 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:29:17.877875 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:17.887876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:17.899631 systemd-udevd[394]: Using default interface naming scheme 'v255'. Sep 4 17:29:17.903980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:17.917650 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:29:17.931425 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Sep 4 17:29:17.958715 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:17.966367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:18.006621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:18.019256 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:29:18.049592 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:18.057404 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:18.064902 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:18.070673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:18.080911 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:29:18.096870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:18.113782 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:29:18.101218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
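The raid6 lines above benchmark each gen() implementation and then keep the fastest ("using algorithm avx512x4"). Reproducing that choice from the throughputs printed in the log:

#!/usr/bin/env python3
"""Pick the fastest raid6 gen() implementation from the benchmark results."""

measurements = {        # MB/s, copied from the raid6 benchmark lines above
    "avx512x4": 18229,
    "avx512x2": 18203,
    "avx512x1": 18052,
    "avx2x4":   18174,
    "avx2x2":   18102,
    "avx2x1":   13882,
}

best = max(measurements, key=measurements.get)
print(f"using algorithm {best} gen() {measurements[best]} MB/s")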
Sep 4 17:29:18.104963 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:18.107951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:18.108122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.111414 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:18.136037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:18.142675 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:18.155953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:18.156056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.174744 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:29:18.175891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:18.519185 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:29:18.519221 kernel: AES CTR mode by8 optimization enabled Sep 4 17:29:18.519768 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:29:18.519797 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 17:29:18.519816 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:29:18.519834 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:29:18.519852 kernel: PTP clock support registered Sep 4 17:29:18.519870 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:29:18.519886 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:29:18.519909 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:29:18.519925 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:29:18.519943 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:29:18.519972 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:29:18.487673 systemd-resolved[243]: Clock change detected. Flushing caches. Sep 4 17:29:18.541526 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:29:18.541805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.553246 kernel: scsi host0: storvsc_host_t Sep 4 17:29:18.553300 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:29:18.555952 kernel: scsi host1: storvsc_host_t Sep 4 17:29:18.560627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:18.567176 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:29:18.567409 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:29:18.578289 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:29:18.585753 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 17:29:18.585789 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:29:18.595465 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 17:29:18.595673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:29:18.601256 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:29:18.601426 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: VF slot 1 added Sep 4 17:29:18.622564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
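Editor's note: the block above is the Hyper-V VMBus stack registering its paravirtual drivers (keyboard, utility services, networking, storage). The devices those drivers bind to can be enumerated from sysfs; a short Python sketch, assuming a Hyper-V guest exposing the usual vmbus attributes (class_id, driver symlink):

    # List VMBus devices with their class IDs and bound drivers, i.e. the devices
    # the hv_* driver registrations above attach to.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/vmbus/devices").iterdir()):
        class_id = (dev / "class_id").read_text().strip()
        driver_link = dev / "driver"
        driver = driver_link.resolve().name if driver_link.exists() else "(none)"
        print(f"{dev.name}: class={class_id} driver={driver}")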
Sep 4 17:29:18.636270 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:29:18.645889 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:29:18.646100 kernel: hv_pci 48d58ca8-ea40-4e3b-8d58-cdaed02d056b: PCI VMBus probing: Using version 0x10004 Sep 4 17:29:18.646245 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:29:18.646369 kernel: hv_pci 48d58ca8-ea40-4e3b-8d58-cdaed02d056b: PCI host bridge to bus ea40:00 Sep 4 17:29:18.654887 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 17:29:18.655112 kernel: pci_bus ea40:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:29:18.655340 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:29:18.659278 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:29:18.659437 kernel: pci_bus ea40:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:29:18.662983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:18.663016 kernel: pci ea40:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:29:18.668012 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 17:29:18.668186 kernel: pci ea40:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:29:18.679246 kernel: pci ea40:00:02.0: enabling Extended Tags Sep 4 17:29:18.693259 kernel: pci ea40:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ea40:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:29:18.694103 kernel: pci_bus ea40:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:29:18.699577 kernel: pci ea40:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:29:18.887035 kernel: mlx5_core ea40:00:02.0: enabling device (0000 -> 0002) Sep 4 17:29:18.892257 kernel: mlx5_core ea40:00:02.0: firmware version: 14.30.1284 Sep 4 17:29:19.108251 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: VF registering: eth1 Sep 4 17:29:19.108478 kernel: mlx5_core ea40:00:02.0 eth1: joined to eth0 Sep 4 17:29:19.115248 kernel: mlx5_core ea40:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:29:19.123250 kernel: mlx5_core ea40:00:02.0 enP59968s1: renamed from eth1 Sep 4 17:29:19.186450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:29:19.263254 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437) Sep 4 17:29:19.278311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:29:19.325768 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:29:19.367253 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (448) Sep 4 17:29:19.381174 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:29:19.388204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:29:19.401377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:29:19.412288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:19.420250 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:20.426291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:20.427376 disk-uuid[601]: The operation has completed successfully. Sep 4 17:29:20.492604 systemd[1]: disk-uuid.service: Deactivated successfully. 
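Editor's note: the storvsc probe above reports the OS disk as 63737856 logical blocks of 512 bytes and prints both a decimal and a binary size. The arithmetic checks out; a tiny worked example:

    # 63737856 blocks * 512 bytes = 32,633,782,272 bytes.
    blocks, block_size = 63_737_856, 512
    size_bytes = blocks * block_size

    print(f"{size_bytes / 10**9:.1f} GB")   # 32.6 GB  (decimal gigabytes)
    print(f"{size_bytes / 2**30:.1f} GiB")  # 30.4 GiB (binary gibibytes)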
Sep 4 17:29:20.492713 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:29:20.527404 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:29:20.533748 sh[687]: Success Sep 4 17:29:20.585256 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:29:20.798446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:29:20.809022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:29:20.811726 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:29:20.845251 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 17:29:20.845291 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:20.851221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:29:20.854324 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:29:20.856895 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:29:21.202049 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:29:21.208099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:29:21.218384 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:29:21.224844 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:29:21.241037 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:21.241088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:21.243018 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:21.280538 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:21.290999 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:29:21.298286 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:21.310323 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:29:21.323476 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:29:21.337847 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:21.349371 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:21.370999 systemd-networkd[871]: lo: Link UP Sep 4 17:29:21.371009 systemd-networkd[871]: lo: Gained carrier Sep 4 17:29:21.373057 systemd-networkd[871]: Enumeration completed Sep 4 17:29:21.373327 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:21.374598 systemd[1]: Reached target network.target - Network. Sep 4 17:29:21.389749 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:21.389758 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
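Editor's note: verity-setup above maps the /usr partition through dm-verity as /dev/mapper/usr (using the sha256-avx2 implementation), and it is then mounted read-only at /sysusr/usr. A purely illustrative Python sketch, assuming it runs after that service has finished, that confirms the mapping exists and is read-only, as mount.usrflags=ro requires; the sysfs layout used here (/sys/block/dm-*/dm/name and .../ro) is standard device-mapper, not something shown in the log:

    # Find the device-mapper target named "usr" and report whether it is read-only.
    from pathlib import Path

    for dm in Path("/sys/block").glob("dm-*"):
        name = (dm / "dm" / "name").read_text().strip()
        if name == "usr":
            read_only = (dm / "ro").read_text().strip() == "1"
            print(f"/dev/mapper/usr -> {dm.name}, read-only={read_only}")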
Sep 4 17:29:21.452255 kernel: mlx5_core ea40:00:02.0 enP59968s1: Link up Sep 4 17:29:21.484959 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: Data path switched to VF: enP59968s1 Sep 4 17:29:21.484555 systemd-networkd[871]: enP59968s1: Link UP Sep 4 17:29:21.484678 systemd-networkd[871]: eth0: Link UP Sep 4 17:29:21.484876 systemd-networkd[871]: eth0: Gained carrier Sep 4 17:29:21.484889 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:21.489439 systemd-networkd[871]: enP59968s1: Gained carrier Sep 4 17:29:21.541322 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:29:22.691194 ignition[852]: Ignition 2.18.0 Sep 4 17:29:22.691206 ignition[852]: Stage: fetch-offline Sep 4 17:29:22.691263 ignition[852]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:22.691274 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:22.691448 ignition[852]: parsed url from cmdline: "" Sep 4 17:29:22.691454 ignition[852]: no config URL provided Sep 4 17:29:22.691464 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:29:22.691475 ignition[852]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:29:22.691482 ignition[852]: failed to fetch config: resource requires networking Sep 4 17:29:22.693339 ignition[852]: Ignition finished successfully Sep 4 17:29:22.713036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:22.722476 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 17:29:22.737394 ignition[880]: Ignition 2.18.0 Sep 4 17:29:22.737404 ignition[880]: Stage: fetch Sep 4 17:29:22.737614 ignition[880]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:22.737627 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:22.739256 ignition[880]: parsed url from cmdline: "" Sep 4 17:29:22.739261 ignition[880]: no config URL provided Sep 4 17:29:22.739269 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:29:22.739281 ignition[880]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:29:22.740684 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:29:22.814382 ignition[880]: GET result: OK Sep 4 17:29:22.814495 ignition[880]: config has been read from IMDS userdata Sep 4 17:29:22.814536 ignition[880]: parsing config with SHA512: 4a829970494650b431bb8057696777fd158909c2fe9fd3b6527b843296b3e89942c7a0554c03facd44930d4812da0d22c7498c39c332b4b85bdab13e3fb29774 Sep 4 17:29:22.819879 unknown[880]: fetched base config from "system" Sep 4 17:29:22.820030 unknown[880]: fetched base config from "system" Sep 4 17:29:22.820652 ignition[880]: fetch: fetch complete Sep 4 17:29:22.820040 unknown[880]: fetched user config from "azure" Sep 4 17:29:22.820658 ignition[880]: fetch: fetch passed Sep 4 17:29:22.822445 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:29:22.820706 ignition[880]: Ignition finished successfully Sep 4 17:29:22.831374 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
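Editor's note: the Ignition fetch stage above retrieves its config from the Azure instance-metadata endpoint shown in the GET line and then logs a SHA512 of the parsed config. A rough Python sketch of that fetch, not Ignition's actual implementation; the Metadata: true header requirement and the base64 encoding of userData are standard Azure IMDS behaviour, but treat those details as assumptions here:

    import base64
    import hashlib
    import urllib.request

    # Same endpoint as the "GET http://169.254.169.254/..." line above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        userdata = base64.b64decode(resp.read())  # IMDS returns userData base64-encoded

    # Ignition logs a digest of the parsed config; hashing the raw payload is analogous.
    print("sha512:", hashlib.sha512(userdata).hexdigest())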
Sep 4 17:29:22.851052 ignition[887]: Ignition 2.18.0 Sep 4 17:29:22.851063 ignition[887]: Stage: kargs Sep 4 17:29:22.851288 ignition[887]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:22.851302 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:22.857942 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:29:22.855219 ignition[887]: kargs: kargs passed Sep 4 17:29:22.855277 ignition[887]: Ignition finished successfully Sep 4 17:29:22.870733 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:29:22.884768 ignition[894]: Ignition 2.18.0 Sep 4 17:29:22.884778 ignition[894]: Stage: disks Sep 4 17:29:22.886607 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:29:22.884974 ignition[894]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:22.889793 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:22.884989 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:22.893929 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:29:22.885873 ignition[894]: disks: disks passed Sep 4 17:29:22.900257 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:22.885915 ignition[894]: Ignition finished successfully Sep 4 17:29:22.908125 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:22.923283 systemd-networkd[871]: eth0: Gained IPv6LL Sep 4 17:29:22.924592 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:22.934514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:29:22.996294 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:29:23.000381 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:29:23.015327 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:29:23.040406 systemd-networkd[871]: enP59968s1: Gained IPv6LL Sep 4 17:29:23.117272 kernel: EXT4-fs (sda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 17:29:23.117798 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:29:23.120587 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:23.164365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:23.170007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:29:23.179258 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (914) Sep 4 17:29:23.186408 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:29:23.207327 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:23.207359 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:23.207386 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:23.207404 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:23.200699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:29:23.200733 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:23.217347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:29:23.220067 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:29:23.238670 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:29:24.094628 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:29:24.119304 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:29:24.125530 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:29:24.130512 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:29:24.250325 coreos-metadata[916]: Sep 04 17:29:24.250 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:29:24.256878 coreos-metadata[916]: Sep 04 17:29:24.256 INFO Fetch successful Sep 4 17:29:24.259645 coreos-metadata[916]: Sep 04 17:29:24.257 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:29:24.277392 coreos-metadata[916]: Sep 04 17:29:24.277 INFO Fetch successful Sep 4 17:29:24.290456 coreos-metadata[916]: Sep 04 17:29:24.290 INFO wrote hostname ci-3975.2.1-a-eeaffe6a3f to /sysroot/etc/hostname Sep 4 17:29:24.296805 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:29:24.949470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:24.960341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:29:24.967416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:29:24.977027 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:24.976448 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:29:25.005271 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:29:25.007913 ignition[1037]: INFO : Ignition 2.18.0 Sep 4 17:29:25.007913 ignition[1037]: INFO : Stage: mount Sep 4 17:29:25.011926 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:25.011926 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:25.013451 ignition[1037]: INFO : mount: mount passed Sep 4 17:29:25.013451 ignition[1037]: INFO : Ignition finished successfully Sep 4 17:29:25.024208 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:29:25.033336 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:29:25.048417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:25.058247 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1049) Sep 4 17:29:25.058278 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:25.062243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:25.066809 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:25.072253 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:25.073514 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
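Editor's note: flatcar-metadata-hostname above asks IMDS for the compute name and writes it into the target root, which is where the hostname ci-3975.2.1-a-eeaffe6a3f comes from. A minimal Python sketch of the same two steps, illustrative only; the real agent is coreos-metadata:

    import urllib.request

    # Endpoint taken verbatim from the coreos-metadata fetch logged above.
    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    # Written under /sysroot because the system has not switched to the real root yet.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")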
Sep 4 17:29:25.097091 ignition[1066]: INFO : Ignition 2.18.0 Sep 4 17:29:25.097091 ignition[1066]: INFO : Stage: files Sep 4 17:29:25.101323 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:25.101323 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:25.107943 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:29:25.111606 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:29:25.111606 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:29:25.221735 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:29:25.226695 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:29:25.226695 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:29:25.222217 unknown[1066]: wrote ssh authorized keys file for user: core Sep 4 17:29:25.244894 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:29:25.250034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:29:25.254805 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:25.259857 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:29:25.447599 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:29:25.546600 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:25.546600 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:25.558052 ignition[1066]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:25.558052 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:29:25.950993 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:29:26.262674 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:29:26.262674 ignition[1066]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 4 17:29:26.293991 ignition[1066]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:29:26.301554 ignition[1066]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:29:26.301554 ignition[1066]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 4 17:29:26.301554 ignition[1066]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 4 17:29:26.314168 ignition[1066]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:26.334117 ignition[1066]: INFO : files: files passed Sep 4 17:29:26.334117 ignition[1066]: INFO : Ignition finished successfully Sep 4 17:29:26.326844 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:29:26.348362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:29:26.363423 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:29:26.367000 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:29:26.368516 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
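Editor's note: the files stage above writes a containerd drop-in, installs prepare-helm.service, enables its preset, and symlinks the downloaded kubernetes sysext image into /etc/extensions so systemd-sysext can merge it later in the boot (see the sd-merge lines further down). A rough Python rendering of the symlink and drop-in steps against the /sysroot prefix Ignition uses; the drop-in contents are not shown in the log, so none are invented here:

    from pathlib import Path

    sysroot = Path("/sysroot")

    # op(a)/op(b): the sysext image lands under /opt/extensions and is linked into
    # /etc/extensions, one of the directories systemd-sysext scans.
    link = sysroot / "etc/extensions/kubernetes.raw"
    link.parent.mkdir(parents=True, exist_ok=True)
    if not link.is_symlink():
        link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw")

    # op(d): create the drop-in directory for containerd; the actual
    # 10-use-cgroupfs.conf contents come from the Ignition config, not this sketch.
    dropin_dir = sysroot / "etc/systemd/system/containerd.service.d"
    dropin_dir.mkdir(parents=True, exist_ok=True)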
Sep 4 17:29:26.381988 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:26.381988 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:26.393550 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:26.384615 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:26.401182 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:29:26.409409 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:29:26.442247 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:29:26.442347 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:29:26.449568 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:29:26.458450 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:29:26.461273 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:29:26.473629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:29:26.486859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:26.498418 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:29:26.510910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:26.517241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:26.523469 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:29:26.528219 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:29:26.531051 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:26.537597 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:29:26.543113 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:29:26.551560 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:29:26.557283 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:26.560493 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:26.569390 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:29:26.572268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:26.578138 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:29:26.586816 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:29:26.592713 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:29:26.597130 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:29:26.598257 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:26.605743 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:26.608881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:26.614950 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:29:26.617801 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 4 17:29:26.621393 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:29:26.628006 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:26.636129 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:29:26.636318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:26.642638 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:29:26.642736 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:29:26.653651 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:29:26.653819 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:29:26.668028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:29:26.673385 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:29:26.675605 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:29:26.675814 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:26.681917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:29:26.684332 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:26.699304 ignition[1119]: INFO : Ignition 2.18.0 Sep 4 17:29:26.699304 ignition[1119]: INFO : Stage: umount Sep 4 17:29:26.699304 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:26.699304 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:26.699304 ignition[1119]: INFO : umount: umount passed Sep 4 17:29:26.699304 ignition[1119]: INFO : Ignition finished successfully Sep 4 17:29:26.699562 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:29:26.701491 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:29:26.722456 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:29:26.722572 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:29:26.728869 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:29:26.728970 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:29:26.734386 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:29:26.741014 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:29:26.744540 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:29:26.746927 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:29:26.754512 systemd[1]: Stopped target network.target - Network. Sep 4 17:29:26.756973 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:29:26.757022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:26.762662 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:29:26.765161 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:29:26.765314 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:26.771278 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:29:26.773725 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:29:26.791921 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:29:26.791975 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 4 17:29:26.799431 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:29:26.799484 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:26.804574 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:29:26.804628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:29:26.815243 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:29:26.815316 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:26.823483 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:29:26.829240 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:29:26.830279 systemd-networkd[871]: eth0: DHCPv6 lease lost Sep 4 17:29:26.837942 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:29:26.838525 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:29:26.838632 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:29:26.844287 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:29:26.844421 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:29:26.849963 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:29:26.850019 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:26.872532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:29:26.878546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:29:26.878611 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:26.888545 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:29:26.888601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:26.893743 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:29:26.893790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:26.896769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:29:26.896816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:26.913890 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:26.939788 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:29:26.939935 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:26.951331 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:29:26.951392 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:26.956953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:29:26.978121 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: Data path switched from VF: enP59968s1 Sep 4 17:29:26.956995 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:26.957588 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:29:26.957629 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:26.958559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:29:26.958596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:26.959867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 4 17:29:26.959903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:26.975498 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:29:26.984089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:29:26.984153 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:26.987556 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:29:26.987601 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:27.021566 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:29:27.021642 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:27.027452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:27.027538 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:27.037028 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:29:27.037121 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:29:27.043890 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:29:27.044209 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:29:27.214124 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:29:27.214344 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:29:27.220307 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:29:27.225513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:29:27.225571 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:27.237389 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:29:27.294896 systemd[1]: Switching root. Sep 4 17:29:27.327767 systemd-journald[176]: Journal stopped Sep 4 17:29:33.018917 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Sep 4 17:29:33.018964 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:29:33.018986 kernel: SELinux: policy capability open_perms=1 Sep 4 17:29:33.019001 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:29:33.019017 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:29:33.019033 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:29:33.019052 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:29:33.019075 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:29:33.019094 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:29:33.019114 kernel: audit: type=1403 audit(1725470969.606:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:29:33.019135 systemd[1]: Successfully loaded SELinux policy in 203.833ms. Sep 4 17:29:33.019157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.087ms. Sep 4 17:29:33.019179 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:33.019198 systemd[1]: Detected virtualization microsoft. 
Sep 4 17:29:33.019224 systemd[1]: Detected architecture x86-64. Sep 4 17:29:33.020306 systemd[1]: Detected first boot. Sep 4 17:29:33.020324 systemd[1]: Hostname set to . Sep 4 17:29:33.020335 systemd[1]: Initializing machine ID from random generator. Sep 4 17:29:33.020345 zram_generator::config[1178]: No configuration found. Sep 4 17:29:33.020363 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:29:33.020374 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:29:33.020385 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:29:33.020395 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:29:33.020408 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:29:33.020417 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:29:33.020430 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:29:33.020443 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:29:33.020456 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:29:33.020466 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:29:33.020478 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:29:33.020488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:33.020501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:33.020511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:29:33.020526 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:29:33.020537 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:29:33.020547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:33.020557 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:29:33.020569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:33.020579 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:29:33.020592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:33.020608 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:33.020620 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:33.020634 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:33.020647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:29:33.020660 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:29:33.020673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:29:33.020683 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:29:33.020696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:33.020706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:33.020721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 4 17:29:33.020732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:29:33.020745 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:29:33.020755 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:29:33.020768 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:29:33.020781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:33.020795 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:29:33.020805 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:29:33.020817 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:29:33.020828 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:29:33.020841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:33.020851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:33.020863 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:29:33.020877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:33.020889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:33.020900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:33.020912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:29:33.020924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:33.020935 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:29:33.020947 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 4 17:29:33.020958 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 4 17:29:33.020973 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:33.020984 kernel: loop: module loaded Sep 4 17:29:33.020996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:33.021006 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:29:33.021019 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:29:33.021029 kernel: fuse: init (API version 7.39) Sep 4 17:29:33.021042 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:33.021072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:33.021087 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:29:33.021103 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:29:33.021114 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:29:33.021126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:29:33.021137 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 4 17:29:33.021148 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:29:33.021160 kernel: ACPI: bus type drm_connector registered Sep 4 17:29:33.021192 systemd-journald[1273]: Collecting audit messages is disabled. Sep 4 17:29:33.021220 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:33.021241 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:29:33.021251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:29:33.021265 systemd-journald[1273]: Journal started Sep 4 17:29:33.021291 systemd-journald[1273]: Runtime Journal (/run/log/journal/a3e45d7bb8084ab9b06d0e63f50e6df4) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:29:33.025258 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:29:33.033209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:33.033731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:33.038013 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:29:33.041992 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:29:33.042286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:29:33.046147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:33.046494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:33.050524 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:29:33.050853 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:29:33.055307 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:33.055632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:33.059477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:29:33.063773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:29:33.084207 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:29:33.093524 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:29:33.102169 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:29:33.108153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:29:33.182418 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:29:33.186796 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:29:33.190200 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:33.192217 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:29:33.195507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:33.197074 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:29:33.202976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:33.207037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
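Editor's note: journald above sizes its runtime journal under /run/log/journal (8.0M used against a 158.8M cap, 150.8M free). A small illustrative Python sketch that approximates the "used" figure by summing the journal files in that directory; journald's own accounting may round differently:

    from pathlib import Path

    root = Path("/run/log/journal")
    total = sum(f.stat().st_size for f in root.rglob("*.journal")) if root.exists() else 0
    print(f"runtime journal usage: {total / 2**20:.1f} MiB")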
Sep 4 17:29:33.210878 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:29:33.214375 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:29:33.227349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:33.233398 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:29:33.246137 udevadm[1342]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:29:33.262846 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:29:33.269514 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:29:33.295340 systemd-journald[1273]: Time spent on flushing to /var/log/journal/a3e45d7bb8084ab9b06d0e63f50e6df4 is 16.788ms for 953 entries. Sep 4 17:29:33.295340 systemd-journald[1273]: System Journal (/var/log/journal/a3e45d7bb8084ab9b06d0e63f50e6df4) is 8.0M, max 2.6G, 2.6G free. Sep 4 17:29:33.324042 systemd-journald[1273]: Received client request to flush runtime journal. Sep 4 17:29:33.318799 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Sep 4 17:29:33.318821 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Sep 4 17:29:33.325585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:33.330316 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:29:33.339396 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:29:33.431007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:33.459455 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:29:33.469359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:33.490533 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 4 17:29:33.490575 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 4 17:29:33.496657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:34.561990 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:29:34.572411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:34.596840 systemd-udevd[1367]: Using default interface naming scheme 'v255'. Sep 4 17:29:34.807117 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:34.820146 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:34.887260 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376) Sep 4 17:29:34.927210 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 4 17:29:34.972970 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:29:34.978763 kernel: hv_vmbus: registering driver hv_balloon Sep 4 17:29:34.985447 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 4 17:29:34.991513 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 4 17:29:35.033242 kernel: hv_vmbus: registering driver hyperv_fb Sep 4 17:29:35.075264 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 4 17:29:35.082302 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 4 17:29:35.089347 kernel: Console: switching to colour dummy device 80x25 Sep 4 17:29:35.142374 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:29:35.152562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:35.173497 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:29:35.186602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:35.186998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:35.284770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:35.299514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:35.299800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:35.316403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:35.370253 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 4 17:29:35.380642 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1369) Sep 4 17:29:35.500704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:29:35.517920 systemd-networkd[1373]: lo: Link UP Sep 4 17:29:35.518162 systemd-networkd[1373]: lo: Gained carrier Sep 4 17:29:35.520199 systemd-networkd[1373]: Enumeration completed Sep 4 17:29:35.520432 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:35.520655 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:35.520661 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:29:35.527464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:29:35.533560 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:29:35.539956 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:29:35.580288 kernel: mlx5_core ea40:00:02.0 enP59968s1: Link up Sep 4 17:29:35.599261 kernel: hv_netvsc 000d3a67-722e-000d-3a67-722e000d3a67 eth0: Data path switched to VF: enP59968s1 Sep 4 17:29:35.600209 systemd-networkd[1373]: enP59968s1: Link UP Sep 4 17:29:35.600530 systemd-networkd[1373]: eth0: Link UP Sep 4 17:29:35.600540 systemd-networkd[1373]: eth0: Gained carrier Sep 4 17:29:35.600565 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:35.608799 systemd-networkd[1373]: enP59968s1: Gained carrier Sep 4 17:29:35.627193 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:29:35.630298 systemd-networkd[1373]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:29:35.657521 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:29:35.661720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
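Editor's note: networkd in the real root reacquires the same DHCPv4 lease as in the initrd, 10.200.8.37/24 with gateway 10.200.8.1, handed out via Azure's 168.63.129.16. A short worked example with Python's standard ipaddress module, just to make the subnet relationship explicit:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.37/24")
    gateway = ipaddress.ip_address("10.200.8.1")

    print(iface.network)             # 10.200.8.0/24
    print(gateway in iface.network)  # True: the default gateway is on-link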
Sep 4 17:29:35.670379 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:29:35.677159 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:29:35.704088 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:29:35.707959 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:29:35.711106 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:29:35.711141 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:35.713766 systemd[1]: Reached target machines.target - Containers. Sep 4 17:29:35.716980 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:29:35.725395 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:29:35.729305 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:29:35.729485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:35.731364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:29:35.737421 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:29:35.749411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:29:35.754125 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:29:35.764614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:29:35.794068 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:29:35.795024 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:29:35.921941 kernel: loop0: detected capacity change from 0 to 80568 Sep 4 17:29:35.922059 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:29:36.065925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:36.349260 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:29:36.408263 kernel: loop1: detected capacity change from 0 to 139904 Sep 4 17:29:36.951255 kernel: loop2: detected capacity change from 0 to 56904 Sep 4 17:29:36.992640 systemd-networkd[1373]: enP59968s1: Gained IPv6LL Sep 4 17:29:37.248415 systemd-networkd[1373]: eth0: Gained IPv6LL Sep 4 17:29:37.255589 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:29:37.298246 kernel: loop3: detected capacity change from 0 to 209816 Sep 4 17:29:37.332248 kernel: loop4: detected capacity change from 0 to 80568 Sep 4 17:29:37.339251 kernel: loop5: detected capacity change from 0 to 139904 Sep 4 17:29:37.356252 kernel: loop6: detected capacity change from 0 to 56904 Sep 4 17:29:37.362258 kernel: loop7: detected capacity change from 0 to 209816 Sep 4 17:29:37.366036 (sd-merge)[1491]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 17:29:37.366598 (sd-merge)[1491]: Merged extensions into '/usr'. 
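The loop0-loop7 capacity changes above are systemd-sysext attaching the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure') that it then merges into /usr. The attached sizes can be cross-checked from sysfs, where block device sizes are reported in 512-byte sectors; a sketch under that assumption:

    from pathlib import Path

    # /sys/block/<dev>/size is the device size in 512-byte sectors.
    for dev in sorted(Path("/sys/block").glob("loop*")):
        sectors = int((dev / "size").read_text())
        if sectors:
            print(f"{dev.name}: {sectors * 512 // 1024} KiB")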
Sep 4 17:29:37.370512 systemd[1]: Reloading requested from client PID 1471 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:29:37.370528 systemd[1]: Reloading... Sep 4 17:29:37.418251 zram_generator::config[1513]: No configuration found. Sep 4 17:29:37.586770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:37.664297 systemd[1]: Reloading finished in 293 ms. Sep 4 17:29:37.681060 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:29:37.694389 systemd[1]: Starting ensure-sysext.service... Sep 4 17:29:37.699383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:37.707595 systemd[1]: Reloading requested from client PID 1581 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:29:37.707617 systemd[1]: Reloading... Sep 4 17:29:37.726913 systemd-tmpfiles[1582]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:29:37.727871 systemd-tmpfiles[1582]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:29:37.729377 systemd-tmpfiles[1582]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:29:37.729907 systemd-tmpfiles[1582]: ACLs are not supported, ignoring. Sep 4 17:29:37.730093 systemd-tmpfiles[1582]: ACLs are not supported, ignoring. Sep 4 17:29:37.752595 systemd-tmpfiles[1582]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:29:37.754390 systemd-tmpfiles[1582]: Skipping /boot Sep 4 17:29:37.767425 systemd-tmpfiles[1582]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:29:37.767681 systemd-tmpfiles[1582]: Skipping /boot Sep 4 17:29:37.786281 zram_generator::config[1609]: No configuration found. Sep 4 17:29:37.929482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:38.015391 systemd[1]: Reloading finished in 307 ms. Sep 4 17:29:38.032668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:38.053154 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:38.073571 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:29:38.080384 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:29:38.086754 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:38.092792 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:29:38.116450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:38.116792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:38.125581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:38.141460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:38.153604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 4 17:29:38.160158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:38.160748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:38.166182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:38.170515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:38.178980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:38.179175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:38.191916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:29:38.196208 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:38.199956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:38.211872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:38.212101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:38.218546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:38.227548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:38.242993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:38.245825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:38.245990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:38.256496 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:29:38.257815 systemd-resolved[1684]: Positive Trust Anchors: Sep 4 17:29:38.257827 systemd-resolved[1684]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:38.257864 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:38.261584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:38.261885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:38.266030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:38.266222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:38.270390 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:38.270588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:38.280657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 17:29:38.281042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:38.285500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:38.290563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:38.299070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:38.307550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:38.310564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:38.310909 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:29:38.315837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:38.318151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:38.318791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:38.322632 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:29:38.322925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:29:38.326712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:38.326983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:38.331119 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:38.331394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:38.340127 systemd[1]: Finished ensure-sysext.service. Sep 4 17:29:38.347871 systemd-resolved[1684]: Using system hostname 'ci-3975.2.1-a-eeaffe6a3f'. Sep 4 17:29:38.348878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:38.348943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:38.351152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:38.353942 systemd[1]: Reached target network.target - Network. Sep 4 17:29:38.356180 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:29:38.358832 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:38.386935 augenrules[1739]: No rules Sep 4 17:29:38.388989 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:29:38.676581 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:29:38.682469 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:29:42.492358 ldconfig[1468]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:29:42.502585 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:29:42.514385 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:29:42.524149 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:29:42.527647 systemd[1]: Reached target sysinit.target - System Initialization. 
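The ldconfig complaint above ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is harmless: the file it stumbled over is a plain-text configuration file, and the check it refers to is simply a comparison of the first four bytes against the ELF magic. A short illustrative sketch of that check (the example paths are assumptions for a typical Linux system):

    ELF_MAGIC = b"\x7fELF"  # the 4-byte magic every ELF object starts with

    def is_elf(path: str) -> bool:
        """Return True if the file begins with the ELF magic bytes."""
        try:
            with open(path, "rb") as f:
                return f.read(4) == ELF_MAGIC
        except OSError:
            return False

    print(is_elf("/lib/ld.so.conf"))  # expected: False, it is plain text
    print(is_elf("/bin/ls"))          # expected: True on a typical Linux system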
Sep 4 17:29:42.530615 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:29:42.534073 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:29:42.537680 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:29:42.540719 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:29:42.544568 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:29:42.547919 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:29:42.547977 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:42.550352 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:42.553854 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:29:42.558281 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:29:42.562127 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:29:42.567041 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:29:42.569928 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:42.572628 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:42.575357 systemd[1]: System is tainted: cgroupsv1 Sep 4 17:29:42.575405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:29:42.575445 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:29:42.579418 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 17:29:42.585338 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:29:42.590404 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:29:42.596057 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:29:42.607343 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:29:42.619353 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:29:42.622223 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:29:42.624930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:42.633335 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:29:42.647358 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:29:42.658427 jq[1759]: false Sep 4 17:29:42.656370 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:29:42.673968 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:29:42.679550 (chronyd)[1755]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 17:29:42.680028 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:29:42.692358 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:29:42.699778 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 4 17:29:42.709364 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:29:42.726275 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:29:42.729767 chronyd[1788]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 17:29:42.732489 chronyd[1788]: Timezone right/UTC failed leap second check, ignoring Sep 4 17:29:42.732709 chronyd[1788]: Loaded seccomp filter (level 2) Sep 4 17:29:42.742554 extend-filesystems[1761]: Found loop4 Sep 4 17:29:42.742554 extend-filesystems[1761]: Found loop5 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found loop6 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found loop7 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda1 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda2 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda3 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found usr Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda4 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda6 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda7 Sep 4 17:29:42.757406 extend-filesystems[1761]: Found sda9 Sep 4 17:29:42.757406 extend-filesystems[1761]: Checking size of /dev/sda9 Sep 4 17:29:42.747951 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 17:29:42.751042 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:29:42.751371 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:29:42.807030 jq[1786]: true Sep 4 17:29:42.757021 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:29:42.757300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:29:42.791809 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:29:42.796645 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:29:42.796936 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:29:42.818141 (ntainerd)[1805]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:29:42.841342 extend-filesystems[1761]: Old size kept for /dev/sda9 Sep 4 17:29:42.843899 extend-filesystems[1761]: Found sr0 Sep 4 17:29:42.845874 jq[1804]: true Sep 4 17:29:42.856629 update_engine[1784]: I0904 17:29:42.856243 1784 main.cc:92] Flatcar Update Engine starting Sep 4 17:29:42.860156 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:29:42.863040 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:29:42.895754 systemd-logind[1777]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:29:42.898653 systemd-logind[1777]: New seat seat0. Sep 4 17:29:42.899412 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:29:42.922452 dbus-daemon[1758]: [system] SELinux support is enabled Sep 4 17:29:42.922978 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 17:29:42.926794 update_engine[1784]: I0904 17:29:42.926748 1784 update_check_scheduler.cc:74] Next update check in 6m10s Sep 4 17:29:42.934298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:29:42.934331 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:29:42.945137 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:29:42.945168 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:29:42.952893 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:29:42.960580 dbus-daemon[1758]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:29:42.960998 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:29:42.969399 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:29:42.979042 tar[1801]: linux-amd64/helm Sep 4 17:29:43.063652 coreos-metadata[1757]: Sep 04 17:29:43.057 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:29:43.063652 coreos-metadata[1757]: Sep 04 17:29:43.062 INFO Fetch successful Sep 4 17:29:43.063652 coreos-metadata[1757]: Sep 04 17:29:43.063 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 17:29:43.065209 bash[1851]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:29:43.067064 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:29:43.079693 coreos-metadata[1757]: Sep 04 17:29:43.079 INFO Fetch successful Sep 4 17:29:43.080954 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:29:43.081665 coreos-metadata[1757]: Sep 04 17:29:43.081 INFO Fetching http://168.63.129.16/machine/55be653b-e3f6-4baa-8ba7-2cd0016acf9b/609276f5%2D3ce5%2D466c%2Dbaf9%2D47d0ef51b093.%5Fci%2D3975.2.1%2Da%2Deeaffe6a3f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 17:29:43.083816 coreos-metadata[1757]: Sep 04 17:29:43.083 INFO Fetch successful Sep 4 17:29:43.085272 coreos-metadata[1757]: Sep 04 17:29:43.084 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:29:43.101244 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1844) Sep 4 17:29:43.102223 coreos-metadata[1757]: Sep 04 17:29:43.102 INFO Fetch successful Sep 4 17:29:43.172696 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:29:43.176879 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:29:43.396098 locksmithd[1850]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:29:43.706419 sshd_keygen[1796]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:29:43.800132 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:29:43.815032 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:29:43.830677 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
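coreos-metadata above fetches provisioning data from the Azure WireServer (168.63.129.16) and the instance metadata service at 169.254.169.254, including the vmSize text endpoint. A minimal stdlib sketch of that last query; the URL is copied from the log, the rest is an assumption, and IMDS only answers from inside an Azure VM and only when the request carries the "Metadata: true" header:

    import urllib.request

    # Same endpoint coreos-metadata queries in the log above.
    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            print("vmSize:", resp.read().decode())
    except OSError as exc:
        print("not running on an Azure VM (or IMDS unreachable):", exc)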
Sep 4 17:29:43.861154 tar[1801]: linux-amd64/LICENSE Sep 4 17:29:43.861338 tar[1801]: linux-amd64/README.md Sep 4 17:29:43.886146 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:29:43.888505 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:29:43.903644 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 17:29:43.907414 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:29:43.919627 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:29:43.964786 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:29:43.973085 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:29:43.982863 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:29:43.986810 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:29:44.030610 containerd[1805]: time="2024-09-04T17:29:44.029575600Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:29:44.059820 containerd[1805]: time="2024-09-04T17:29:44.059779500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:29:44.059820 containerd[1805]: time="2024-09-04T17:29:44.059822200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:44.061486 containerd[1805]: time="2024-09-04T17:29:44.061442700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:44.061688 containerd[1805]: time="2024-09-04T17:29:44.061581500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.061932000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.061963500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062066500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062124700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062141800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062217800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062475700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062500400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:29:44.062528 containerd[1805]: time="2024-09-04T17:29:44.062514600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062856 containerd[1805]: time="2024-09-04T17:29:44.062721100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:44.062856 containerd[1805]: time="2024-09-04T17:29:44.062742600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:29:44.062856 containerd[1805]: time="2024-09-04T17:29:44.062809700Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:29:44.062856 containerd[1805]: time="2024-09-04T17:29:44.062826500Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:29:44.080761 containerd[1805]: time="2024-09-04T17:29:44.080712200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:29:44.080761 containerd[1805]: time="2024-09-04T17:29:44.080756000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.080774900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.080834900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.080864700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.080876000Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.080887300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081001300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081014000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081025900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081038000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081051400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081067100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081079200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081090200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083342 containerd[1805]: time="2024-09-04T17:29:44.081103400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081120400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081138400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081189900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081343200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081798600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081838700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081857400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081888300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081947700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081965600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081981400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.081999200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.082017700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.083790 containerd[1805]: time="2024-09-04T17:29:44.082045300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082064200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082080300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082098100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082279400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082307000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082325300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082343700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082362200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082393400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082413400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084268 containerd[1805]: time="2024-09-04T17:29:44.082429100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.082770900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.082886300Z" level=info msg="Connect containerd service" Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.082930400Z" level=info msg="using legacy CRI server" Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.082940700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.083052900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.083759300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.083799400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.083838700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.083855500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.083874300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.084212200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:29:44.084637 containerd[1805]: time="2024-09-04T17:29:44.084289900Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085197400Z" level=info msg="Start subscribing containerd event" Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085275000Z" level=info msg="Start recovering state" Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085349400Z" level=info msg="Start event monitor" Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085363600Z" level=info msg="Start snapshots syncer" Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085374100Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085384700Z" level=info msg="Start streaming server" Sep 4 17:29:44.086693 containerd[1805]: time="2024-09-04T17:29:44.085450200Z" level=info msg="containerd successfully booted in 0.058792s" Sep 4 17:29:44.086452 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:29:44.323976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:44.325004 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:44.328592 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:29:44.332192 systemd[1]: Startup finished in 812ms (firmware) + 35.270s (loader) + 13.409s (kernel) + 14.927s (userspace) = 1min 4.419s. Sep 4 17:29:44.651642 login[1929]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:29:44.653876 login[1930]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:29:44.664710 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:29:44.673594 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:29:44.677009 systemd-logind[1777]: New session 1 of user core. Sep 4 17:29:44.684398 systemd-logind[1777]: New session 2 of user core. Sep 4 17:29:44.699576 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:29:44.714538 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:29:44.719846 (systemd)[1956]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:44.964800 systemd[1956]: Queued start job for default target default.target. Sep 4 17:29:44.965383 systemd[1956]: Created slice app.slice - User Application Slice. Sep 4 17:29:44.965414 systemd[1956]: Reached target paths.target - Paths. Sep 4 17:29:44.965431 systemd[1956]: Reached target timers.target - Timers. Sep 4 17:29:44.972057 systemd[1956]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:29:44.981628 systemd[1956]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:29:44.982433 systemd[1956]: Reached target sockets.target - Sockets. Sep 4 17:29:44.982454 systemd[1956]: Reached target basic.target - Basic System. Sep 4 17:29:44.982517 systemd[1956]: Reached target default.target - Main User Target. Sep 4 17:29:44.982550 systemd[1956]: Startup finished in 255ms. Sep 4 17:29:44.983311 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:29:44.991085 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:29:44.993124 systemd[1]: Started session-2.scope - Session 2 of User core. 
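containerd reports above that it is serving on /run/containerd/containerd.sock (plus its ttrpc twin) and booted in about 59 ms. A quick liveness check is to ask the daemon for its version over that socket; the sketch below shells out to the bundled ctr client, assuming ctr is on PATH and the caller has permission to read the socket (normally root):

    import subprocess

    SOCKET = "/run/containerd/containerd.sock"  # address reported in the log above

    # `ctr version` prints client and server versions; a populated server
    # section confirms the daemon is answering on the socket.
    try:
        result = subprocess.run(
            ["ctr", "--address", SOCKET, "version"],
            capture_output=True, text=True, timeout=5,
        )
        print(result.stdout if result.returncode == 0 else result.stderr)
    except FileNotFoundError:
        print("ctr not installed on this machine")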
Sep 4 17:29:45.171269 kubelet[1943]: E0904 17:29:45.170853 1943 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:45.174439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:45.174653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:45.294727 waagent[1921]: 2024-09-04T17:29:45.294567Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 17:29:45.297899 waagent[1921]: 2024-09-04T17:29:45.297829Z INFO Daemon Daemon OS: flatcar 3975.2.1 Sep 4 17:29:45.300482 waagent[1921]: 2024-09-04T17:29:45.300427Z INFO Daemon Daemon Python: 3.11.9 Sep 4 17:29:45.303021 waagent[1921]: 2024-09-04T17:29:45.302955Z INFO Daemon Daemon Run daemon Sep 4 17:29:45.305438 waagent[1921]: 2024-09-04T17:29:45.305388Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.2.1' Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.309880Z INFO Daemon Daemon Using waagent for provisioning Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.310640Z INFO Daemon Daemon Activate resource disk Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.310986Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.315325Z INFO Daemon Daemon Found device: None Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.315504Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.315949Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.318333Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:29:45.334721 waagent[1921]: 2024-09-04T17:29:45.319278Z INFO Daemon Daemon Running default provisioning handler Sep 4 17:29:45.338039 waagent[1921]: 2024-09-04T17:29:45.337757Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 4 17:29:45.344709 waagent[1921]: 2024-09-04T17:29:45.344660Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 17:29:45.349592 waagent[1921]: 2024-09-04T17:29:45.349543Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 17:29:45.354433 waagent[1921]: 2024-09-04T17:29:45.352225Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 17:29:45.427580 waagent[1921]: 2024-09-04T17:29:45.427389Z INFO Daemon Daemon Successfully mounted dvd Sep 4 17:29:45.442361 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 17:29:45.444447 waagent[1921]: 2024-09-04T17:29:45.444391Z INFO Daemon Daemon Detect protocol endpoint Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.444662Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.445260Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.446277Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.446896Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.447785Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.456962Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.458037Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 17:29:45.468532 waagent[1921]: 2024-09-04T17:29:45.458817Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 17:29:45.538013 waagent[1921]: 2024-09-04T17:29:45.537927Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 17:29:45.545332 waagent[1921]: 2024-09-04T17:29:45.542011Z INFO Daemon Daemon Forcing an update of the goal state. Sep 4 17:29:45.545500 waagent[1921]: 2024-09-04T17:29:45.545446Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:29:45.586775 waagent[1921]: 2024-09-04T17:29:45.586713Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.154 Sep 4 17:29:45.604529 waagent[1921]: 2024-09-04T17:29:45.587487Z INFO Daemon Sep 4 17:29:45.604529 waagent[1921]: 2024-09-04T17:29:45.587793Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 244e8be2-88a5-48fd-8818-768be61d3bdb eTag: 9691436068624443429 source: Fabric] Sep 4 17:29:45.604529 waagent[1921]: 2024-09-04T17:29:45.589153Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 17:29:45.604529 waagent[1921]: 2024-09-04T17:29:45.589986Z INFO Daemon Sep 4 17:29:45.604529 waagent[1921]: 2024-09-04T17:29:45.590963Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:29:45.607093 waagent[1921]: 2024-09-04T17:29:45.607048Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 17:29:45.674953 waagent[1921]: 2024-09-04T17:29:45.674879Z INFO Daemon Downloaded certificate {'thumbprint': 'FE1D5FA92005FB3EA684AC0750E4A60A96256D2F', 'hasPrivateKey': True} Sep 4 17:29:45.680547 waagent[1921]: 2024-09-04T17:29:45.680489Z INFO Daemon Downloaded certificate {'thumbprint': '096CA692BEEA3327340096A52AF88AE222850681', 'hasPrivateKey': False} Sep 4 17:29:45.687675 waagent[1921]: 2024-09-04T17:29:45.681021Z INFO Daemon Fetch goal state completed Sep 4 17:29:45.693130 waagent[1921]: 2024-09-04T17:29:45.693080Z INFO Daemon Daemon Starting provisioning Sep 4 17:29:45.700260 waagent[1921]: 2024-09-04T17:29:45.693292Z INFO Daemon Daemon Handle ovf-env.xml. Sep 4 17:29:45.700260 waagent[1921]: 2024-09-04T17:29:45.693750Z INFO Daemon Daemon Set hostname [ci-3975.2.1-a-eeaffe6a3f] Sep 4 17:29:45.710892 waagent[1921]: 2024-09-04T17:29:45.710835Z INFO Daemon Daemon Publish hostname [ci-3975.2.1-a-eeaffe6a3f] Sep 4 17:29:45.718982 waagent[1921]: 2024-09-04T17:29:45.711195Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 17:29:45.718982 waagent[1921]: 2024-09-04T17:29:45.712193Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 17:29:45.742665 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:45.742674 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
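The kubelet failure reported earlier in this boot ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the unit exiting with status 1 before the node has been configured; presumably the config file is written later by whatever bootstraps the node. A tiny pre-flight sketch of the same condition, using only the path copied from the error message:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above

    if not os.path.isfile(KUBELET_CONFIG):
        # Mirrors the failure mode in the log: kubelet exits with status 1
        # while its config file has not been written yet.
        sys.exit(f"{KUBELET_CONFIG} is missing; kubelet will fail to start")

    print("kubelet config present")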
Sep 4 17:29:45.742716 systemd-networkd[1373]: eth0: DHCP lease lost Sep 4 17:29:45.743967 waagent[1921]: 2024-09-04T17:29:45.743906Z INFO Daemon Daemon Create user account if not exists Sep 4 17:29:45.761673 waagent[1921]: 2024-09-04T17:29:45.744184Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 17:29:45.761673 waagent[1921]: 2024-09-04T17:29:45.744725Z INFO Daemon Daemon Configure sudoer Sep 4 17:29:45.761673 waagent[1921]: 2024-09-04T17:29:45.745449Z INFO Daemon Daemon Configure sshd Sep 4 17:29:45.761673 waagent[1921]: 2024-09-04T17:29:45.745890Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 17:29:45.761673 waagent[1921]: 2024-09-04T17:29:45.746919Z INFO Daemon Daemon Deploy ssh public key. Sep 4 17:29:45.762320 systemd-networkd[1373]: eth0: DHCPv6 lease lost Sep 4 17:29:45.783270 systemd-networkd[1373]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:29:47.059507 waagent[1921]: 2024-09-04T17:29:47.059431Z INFO Daemon Daemon Provisioning complete Sep 4 17:29:47.073998 waagent[1921]: 2024-09-04T17:29:47.073944Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 17:29:47.081440 waagent[1921]: 2024-09-04T17:29:47.074205Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 17:29:47.081440 waagent[1921]: 2024-09-04T17:29:47.075187Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 17:29:47.197913 waagent[2012]: 2024-09-04T17:29:47.197826Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 17:29:47.198348 waagent[2012]: 2024-09-04T17:29:47.197969Z INFO ExtHandler ExtHandler OS: flatcar 3975.2.1 Sep 4 17:29:47.198348 waagent[2012]: 2024-09-04T17:29:47.198051Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 4 17:29:47.223563 waagent[2012]: 2024-09-04T17:29:47.223476Z INFO ExtHandler ExtHandler Distro: flatcar-3975.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 17:29:47.223790 waagent[2012]: 2024-09-04T17:29:47.223736Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:29:47.223887 waagent[2012]: 2024-09-04T17:29:47.223845Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:29:47.231547 waagent[2012]: 2024-09-04T17:29:47.231478Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:29:47.237033 waagent[2012]: 2024-09-04T17:29:47.236990Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.154 Sep 4 17:29:47.237490 waagent[2012]: 2024-09-04T17:29:47.237440Z INFO ExtHandler Sep 4 17:29:47.237565 waagent[2012]: 2024-09-04T17:29:47.237532Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 36c2ced9-0bcb-4df5-88a8-dc3ed7e37118 eTag: 9691436068624443429 source: Fabric] Sep 4 17:29:47.237877 waagent[2012]: 2024-09-04T17:29:47.237830Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 4 17:29:47.238430 waagent[2012]: 2024-09-04T17:29:47.238379Z INFO ExtHandler Sep 4 17:29:47.238512 waagent[2012]: 2024-09-04T17:29:47.238464Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:29:47.241813 waagent[2012]: 2024-09-04T17:29:47.241770Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 17:29:47.314164 waagent[2012]: 2024-09-04T17:29:47.314039Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FE1D5FA92005FB3EA684AC0750E4A60A96256D2F', 'hasPrivateKey': True} Sep 4 17:29:47.314542 waagent[2012]: 2024-09-04T17:29:47.314491Z INFO ExtHandler Downloaded certificate {'thumbprint': '096CA692BEEA3327340096A52AF88AE222850681', 'hasPrivateKey': False} Sep 4 17:29:47.314985 waagent[2012]: 2024-09-04T17:29:47.314916Z INFO ExtHandler Fetch goal state completed Sep 4 17:29:47.330375 waagent[2012]: 2024-09-04T17:29:47.330316Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2012 Sep 4 17:29:47.330526 waagent[2012]: 2024-09-04T17:29:47.330478Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 17:29:47.332020 waagent[2012]: 2024-09-04T17:29:47.331962Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.2.1', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 17:29:47.332403 waagent[2012]: 2024-09-04T17:29:47.332355Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 17:29:47.385692 waagent[2012]: 2024-09-04T17:29:47.385643Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 17:29:47.385906 waagent[2012]: 2024-09-04T17:29:47.385853Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 17:29:47.392567 waagent[2012]: 2024-09-04T17:29:47.392416Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 17:29:47.399139 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit waagent.service)... Sep 4 17:29:47.399155 systemd[1]: Reloading... Sep 4 17:29:47.472262 zram_generator::config[2055]: No configuration found. Sep 4 17:29:47.602186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:47.681914 systemd[1]: Reloading finished in 282 ms. Sep 4 17:29:47.705108 waagent[2012]: 2024-09-04T17:29:47.704624Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 17:29:47.712497 systemd[1]: Reloading requested from client PID 2120 ('systemctl') (unit waagent.service)... Sep 4 17:29:47.712513 systemd[1]: Reloading... Sep 4 17:29:47.775279 zram_generator::config[2148]: No configuration found. Sep 4 17:29:47.913935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:47.992981 systemd[1]: Reloading finished in 280 ms. 
Sep 4 17:29:48.018261 waagent[2012]: 2024-09-04T17:29:48.017600Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 17:29:48.018261 waagent[2012]: 2024-09-04T17:29:48.017805Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 17:29:48.559833 waagent[2012]: 2024-09-04T17:29:48.559735Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 17:29:48.560670 waagent[2012]: 2024-09-04T17:29:48.560596Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 17:29:48.561589 waagent[2012]: 2024-09-04T17:29:48.561510Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 17:29:48.562789 waagent[2012]: 2024-09-04T17:29:48.562660Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:29:48.562789 waagent[2012]: 2024-09-04T17:29:48.562721Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 17:29:48.563111 waagent[2012]: 2024-09-04T17:29:48.563000Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:29:48.563202 waagent[2012]: 2024-09-04T17:29:48.563080Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:29:48.563752 waagent[2012]: 2024-09-04T17:29:48.563696Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 17:29:48.564004 waagent[2012]: 2024-09-04T17:29:48.563911Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:29:48.564004 waagent[2012]: 2024-09-04T17:29:48.563957Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 17:29:48.564375 waagent[2012]: 2024-09-04T17:29:48.564326Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 17:29:48.564887 waagent[2012]: 2024-09-04T17:29:48.564830Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 17:29:48.565218 waagent[2012]: 2024-09-04T17:29:48.565142Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 17:29:48.565218 waagent[2012]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 17:29:48.565218 waagent[2012]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 17:29:48.565218 waagent[2012]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 17:29:48.565218 waagent[2012]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:29:48.565218 waagent[2012]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:29:48.565218 waagent[2012]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:29:48.565645 waagent[2012]: 2024-09-04T17:29:48.565598Z INFO EnvHandler ExtHandler Configure routes Sep 4 17:29:48.565755 waagent[2012]: 2024-09-04T17:29:48.565692Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
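[Note] The MonitorHandler dumps /proc/net/route, where addresses are little-endian hex strings (the gateway 0108C80A above is 10.200.8.1, and the destination 0008C80A is the 10.200.8.0/24 subnet). A small Python sketch that decodes the same table:

    import socket
    import struct

    def hex_to_ip(h):
        # /proc/net/route stores IPv4 addresses as little-endian hex, so unpack accordingly.
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row (Iface, Destination, Gateway, ...)
        for line in f:
            iface, dest, gw, *_ = line.split()
            print(iface, hex_to_ip(dest), "via", hex_to_ip(gw))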
Sep 4 17:29:48.565962 waagent[2012]: 2024-09-04T17:29:48.565921Z INFO EnvHandler ExtHandler Gateway:None Sep 4 17:29:48.566028 waagent[2012]: 2024-09-04T17:29:48.566001Z INFO EnvHandler ExtHandler Routes:None Sep 4 17:29:48.566465 waagent[2012]: 2024-09-04T17:29:48.566425Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 17:29:48.572419 waagent[2012]: 2024-09-04T17:29:48.572373Z INFO ExtHandler ExtHandler Sep 4 17:29:48.572856 waagent[2012]: 2024-09-04T17:29:48.572801Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 418401af-c7a0-40f2-8f02-1d24d7549d9d correlation 25462344-c67a-4f1c-840a-a2fa51f83d78 created: 2024-09-04T17:28:29.875404Z] Sep 4 17:29:48.574143 waagent[2012]: 2024-09-04T17:29:48.574088Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 17:29:48.574757 waagent[2012]: 2024-09-04T17:29:48.574711Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Sep 4 17:29:48.639127 waagent[2012]: 2024-09-04T17:29:48.638977Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 28AC2DF4-EAB2-4B8B-A9E5-4C0685A18253;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 17:29:48.671342 waagent[2012]: 2024-09-04T17:29:48.671275Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 17:29:48.671342 waagent[2012]: Executing ['ip', '-a', '-o', 'link']: Sep 4 17:29:48.671342 waagent[2012]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 17:29:48.671342 waagent[2012]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:67:72:2e brd ff:ff:ff:ff:ff:ff Sep 4 17:29:48.671342 waagent[2012]: 3: enP59968s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:67:72:2e brd ff:ff:ff:ff:ff:ff\ altname enP59968p0s2 Sep 4 17:29:48.671342 waagent[2012]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 17:29:48.671342 waagent[2012]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 17:29:48.671342 waagent[2012]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 17:29:48.671342 waagent[2012]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 17:29:48.671342 waagent[2012]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 17:29:48.671342 waagent[2012]: 2: eth0 inet6 fe80::20d:3aff:fe67:722e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:29:48.671342 waagent[2012]: 3: enP59968s1 inet6 fe80::20d:3aff:fe67:722e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:29:48.734136 waagent[2012]: 2024-09-04T17:29:48.734071Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 4 17:29:48.734136 waagent[2012]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:48.734136 waagent[2012]: pkts bytes target prot opt in out source destination Sep 4 17:29:48.734136 waagent[2012]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:48.734136 waagent[2012]: pkts bytes target prot opt in out source destination Sep 4 17:29:48.734136 waagent[2012]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:48.734136 waagent[2012]: pkts bytes target prot opt in out source destination Sep 4 17:29:48.734136 waagent[2012]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:29:48.734136 waagent[2012]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:29:48.734136 waagent[2012]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:29:48.737340 waagent[2012]: 2024-09-04T17:29:48.737271Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 17:29:48.737340 waagent[2012]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:48.737340 waagent[2012]: pkts bytes target prot opt in out source destination Sep 4 17:29:48.737340 waagent[2012]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:48.737340 waagent[2012]: pkts bytes target prot opt in out source destination Sep 4 17:29:48.737340 waagent[2012]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:48.737340 waagent[2012]: pkts bytes target prot opt in out source destination Sep 4 17:29:48.737340 waagent[2012]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:29:48.737340 waagent[2012]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:29:48.737340 waagent[2012]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:29:48.737732 waagent[2012]: 2024-09-04T17:29:48.737583Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 17:29:55.401280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:29:55.407498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:55.510410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:55.513806 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:56.123429 kubelet[2256]: E0904 17:29:56.123367 2256 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:56.127425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:56.127730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:06.151501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:30:06.164437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:06.299407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
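[Note] The EnvHandler output above boils down to three rules on traffic to the WireServer: DNS is allowed, root-owned (UID 0) traffic is allowed, and any other new connection to 168.63.129.16 is dropped. A hedged sketch of equivalent iptables invocations (table and chain placement is an assumption here, and this needs root; waagent manages the real rules itself):

    import subprocess

    WIRESERVER = "168.63.129.16"

    RULES = [
        # Allow DNS queries to the WireServer.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # Allow root-owned traffic, which is what the agent itself uses.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # Drop any other new or invalid connection attempts to the WireServer.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        # -w waits for the xtables lock instead of failing if another process holds it.
        subprocess.run(["iptables", "-w", *rule], check=True)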
Sep 4 17:30:06.302353 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:06.523820 chronyd[1788]: Selected source PHC0 Sep 4 17:30:06.851510 kubelet[2277]: E0904 17:30:06.851393 2277 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:06.854282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:06.854625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:16.901477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:30:16.907778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:17.008404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:17.008709 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:17.525843 kubelet[2298]: E0904 17:30:17.525760 2298 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:17.528919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:17.529280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:23.124994 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 4 17:30:27.651616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 17:30:27.663779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:27.758409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:27.767584 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:28.000511 update_engine[1784]: I0904 17:30:28.000365 1784 update_attempter.cc:509] Updating boot flags... Sep 4 17:30:28.273370 kubelet[2322]: E0904 17:30:28.273210 2322 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:28.276016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:28.276357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:28.321580 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2342) Sep 4 17:30:28.430274 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2341) Sep 4 17:30:28.534414 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2341) Sep 4 17:30:37.036648 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
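[Note] The kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, so systemd simply keeps scheduling restarts (the counter climbs through 2, 3, 4, ...) until it appears. A trivial pre-check sketch of the same condition:

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if CONFIG.is_file():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load it")
    else:
        # Matches the run.go error in the log: the node has not been joined to a cluster yet.
        print(f"{CONFIG} missing; kubeadm init/join has not written the kubelet config yet")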
Sep 4 17:30:37.041512 systemd[1]: Started sshd@0-10.200.8.37:22-10.200.16.10:38286.service - OpenSSH per-connection server daemon (10.200.16.10:38286). Sep 4 17:30:37.898166 sshd[2424]: Accepted publickey for core from 10.200.16.10 port 38286 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:37.899929 sshd[2424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:37.905532 systemd-logind[1777]: New session 3 of user core. Sep 4 17:30:37.916657 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:30:38.401124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 17:30:38.406856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:38.442541 systemd[1]: Started sshd@1-10.200.8.37:22-10.200.16.10:38300.service - OpenSSH per-connection server daemon (10.200.16.10:38300). Sep 4 17:30:39.358765 sshd[2432]: Accepted publickey for core from 10.200.16.10 port 38300 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:39.359509 sshd[2432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:39.371429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:39.380731 systemd-logind[1777]: New session 4 of user core. Sep 4 17:30:39.381694 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:39.385190 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:30:39.427553 kubelet[2441]: E0904 17:30:39.427504 2441 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:39.430134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:39.430496 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:39.738142 sshd[2432]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:39.742838 systemd[1]: sshd@1-10.200.8.37:22-10.200.16.10:38300.service: Deactivated successfully. Sep 4 17:30:39.747183 systemd-logind[1777]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:30:39.747834 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:30:39.749003 systemd-logind[1777]: Removed session 4. Sep 4 17:30:39.851845 systemd[1]: Started sshd@2-10.200.8.37:22-10.200.16.10:35108.service - OpenSSH per-connection server daemon (10.200.16.10:35108). Sep 4 17:30:40.475039 sshd[2458]: Accepted publickey for core from 10.200.16.10 port 35108 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:40.476741 sshd[2458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:40.482064 systemd-logind[1777]: New session 5 of user core. Sep 4 17:30:40.491821 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:30:40.915474 sshd[2458]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:40.920150 systemd[1]: sshd@2-10.200.8.37:22-10.200.16.10:35108.service: Deactivated successfully. Sep 4 17:30:40.924313 systemd-logind[1777]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:30:40.924639 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 4 17:30:40.926156 systemd-logind[1777]: Removed session 5. Sep 4 17:30:41.028714 systemd[1]: Started sshd@3-10.200.8.37:22-10.200.16.10:35110.service - OpenSSH per-connection server daemon (10.200.16.10:35110). Sep 4 17:30:41.648552 sshd[2466]: Accepted publickey for core from 10.200.16.10 port 35110 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:41.650248 sshd[2466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:41.655755 systemd-logind[1777]: New session 6 of user core. Sep 4 17:30:41.662474 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:30:42.094754 sshd[2466]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:42.097895 systemd[1]: sshd@3-10.200.8.37:22-10.200.16.10:35110.service: Deactivated successfully. Sep 4 17:30:42.102882 systemd-logind[1777]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:30:42.103701 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:30:42.104655 systemd-logind[1777]: Removed session 6. Sep 4 17:30:42.202784 systemd[1]: Started sshd@4-10.200.8.37:22-10.200.16.10:35114.service - OpenSSH per-connection server daemon (10.200.16.10:35114). Sep 4 17:30:42.826883 sshd[2474]: Accepted publickey for core from 10.200.16.10 port 35114 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:42.828603 sshd[2474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:42.832561 systemd-logind[1777]: New session 7 of user core. Sep 4 17:30:42.842506 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:30:43.601271 sudo[2478]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:30:43.601697 sudo[2478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:43.640590 sudo[2478]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:43.741937 sshd[2474]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:43.745447 systemd[1]: sshd@4-10.200.8.37:22-10.200.16.10:35114.service: Deactivated successfully. Sep 4 17:30:43.749290 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:30:43.749979 systemd-logind[1777]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:30:43.751359 systemd-logind[1777]: Removed session 7. Sep 4 17:30:43.853994 systemd[1]: Started sshd@5-10.200.8.37:22-10.200.16.10:35130.service - OpenSSH per-connection server daemon (10.200.16.10:35130). Sep 4 17:30:44.479481 sshd[2483]: Accepted publickey for core from 10.200.16.10 port 35130 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:44.481207 sshd[2483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:44.486677 systemd-logind[1777]: New session 8 of user core. Sep 4 17:30:44.494504 systemd[1]: Started session-8.scope - Session 8 of User core. 
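[Note] The sshd entries above follow a regular pattern (Accepted publickey, pam_unix session opened, session closed), which makes session auditing from this log straightforward. A small sketch that tallies accepted-publickey events per user and source address from lines formatted like the ones shown:

    import re
    from collections import Counter

    ACCEPT = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")

    def count_sessions(lines):
        # Tally accepted-publickey events by (user, source address).
        hits = Counter()
        for line in lines:
            m = ACCEPT.search(line)
            if m:
                user, addr, _port = m.groups()
                hits[(user, addr)] += 1
        return hits

    sample = ["sshd[2424]: Accepted publickey for core from 10.200.16.10 port 38286 ssh2: RSA SHA256:..."]
    print(count_sessions(sample))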
Sep 4 17:30:44.825340 sudo[2488]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:30:44.825750 sudo[2488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:44.829185 sudo[2488]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:44.834117 sudo[2487]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:30:44.834475 sudo[2487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:44.852607 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:30:44.854011 auditctl[2491]: No rules Sep 4 17:30:44.854458 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:30:44.854734 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:30:44.859615 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:30:44.884315 augenrules[2510]: No rules Sep 4 17:30:44.886133 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:30:44.887606 sudo[2487]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:44.990897 sshd[2483]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:44.995104 systemd[1]: sshd@5-10.200.8.37:22-10.200.16.10:35130.service: Deactivated successfully. Sep 4 17:30:44.999552 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:30:45.000180 systemd-logind[1777]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:30:45.001052 systemd-logind[1777]: Removed session 8. Sep 4 17:30:45.099575 systemd[1]: Started sshd@6-10.200.8.37:22-10.200.16.10:35136.service - OpenSSH per-connection server daemon (10.200.16.10:35136). Sep 4 17:30:45.719410 sshd[2519]: Accepted publickey for core from 10.200.16.10 port 35136 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:45.721119 sshd[2519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:45.725743 systemd-logind[1777]: New session 9 of user core. Sep 4 17:30:45.736068 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:30:46.063474 sudo[2523]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:30:46.063861 sudo[2523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:47.142550 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:30:47.144802 (dockerd)[2532]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:30:49.543048 dockerd[2532]: time="2024-09-04T17:30:49.542980980Z" level=info msg="Starting up" Sep 4 17:30:49.549536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 4 17:30:49.558563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:50.096413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:30:50.100343 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:50.575765 kubelet[2553]: E0904 17:30:50.575577 2553 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:50.578483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:50.578791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:50.920046 dockerd[2532]: time="2024-09-04T17:30:50.920003597Z" level=info msg="Loading containers: start." Sep 4 17:30:51.138260 kernel: Initializing XFRM netlink socket Sep 4 17:30:51.240101 systemd-networkd[1373]: docker0: Link UP Sep 4 17:30:51.295058 dockerd[2532]: time="2024-09-04T17:30:51.295016498Z" level=info msg="Loading containers: done." Sep 4 17:30:51.866300 dockerd[2532]: time="2024-09-04T17:30:51.866254611Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:30:51.866522 dockerd[2532]: time="2024-09-04T17:30:51.866493713Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:30:51.866651 dockerd[2532]: time="2024-09-04T17:30:51.866627914Z" level=info msg="Daemon has completed initialization" Sep 4 17:30:51.920497 dockerd[2532]: time="2024-09-04T17:30:51.920441002Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:30:51.920994 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:30:55.136869 containerd[1805]: time="2024-09-04T17:30:55.136829205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:30:57.941694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745312946.mount: Deactivated successfully. Sep 4 17:31:00.651207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 4 17:31:00.657488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:00.758476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:00.760762 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:31:01.317919 kubelet[2703]: E0904 17:31:01.317859 2703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:31:01.320618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:31:01.320947 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
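[Note] Once dockerd finishes loading containers it announces "API listen on /run/docker.sock". One quick way to confirm the daemon is answering on that socket is a raw HTTP request over the Unix socket (a sketch; the docker CLI or an SDK is the normal route):

    import socket

    def docker_version(sock_path="/run/docker.sock"):
        # Docker's Engine API is plain HTTP over a Unix socket; GET /version needs no auth.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return b"".join(chunks).decode()

    print(docker_version())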
Sep 4 17:31:08.922083 containerd[1805]: time="2024-09-04T17:31:08.921967451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:08.927780 containerd[1805]: time="2024-09-04T17:31:08.927720682Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530743" Sep 4 17:31:08.935627 containerd[1805]: time="2024-09-04T17:31:08.935569524Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:08.942345 containerd[1805]: time="2024-09-04T17:31:08.942275160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:08.943526 containerd[1805]: time="2024-09-04T17:31:08.943308366Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 13.806437561s" Sep 4 17:31:08.943526 containerd[1805]: time="2024-09-04T17:31:08.943352966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 4 17:31:08.964696 containerd[1805]: time="2024-09-04T17:31:08.964658081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:31:10.985597 containerd[1805]: time="2024-09-04T17:31:10.985476259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:10.990518 containerd[1805]: time="2024-09-04T17:31:10.990459086Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849717" Sep 4 17:31:10.996204 containerd[1805]: time="2024-09-04T17:31:10.996163917Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:11.001029 containerd[1805]: time="2024-09-04T17:31:11.000977843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:11.002128 containerd[1805]: time="2024-09-04T17:31:11.001978548Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 2.037279267s" Sep 4 17:31:11.002128 containerd[1805]: time="2024-09-04T17:31:11.002018449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 4 17:31:11.023259 
containerd[1805]: time="2024-09-04T17:31:11.023217564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:31:11.401475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Sep 4 17:31:11.406925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:11.722668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:11.723405 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:31:11.765755 kubelet[2779]: E0904 17:31:11.765709 2779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:31:11.768240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:31:11.769149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:31:15.309332 containerd[1805]: time="2024-09-04T17:31:15.309278885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:15.353489 containerd[1805]: time="2024-09-04T17:31:15.353303123Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097785" Sep 4 17:31:15.402406 containerd[1805]: time="2024-09-04T17:31:15.402332189Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:15.463354 containerd[1805]: time="2024-09-04T17:31:15.463268519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:15.468485 containerd[1805]: time="2024-09-04T17:31:15.468328947Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 4.445059283s" Sep 4 17:31:15.468485 containerd[1805]: time="2024-09-04T17:31:15.468387147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 4 17:31:15.491094 containerd[1805]: time="2024-09-04T17:31:15.491030770Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:31:20.113449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280231645.mount: Deactivated successfully. 
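[Note] The containerd pull messages report both the image size and the wall-clock time, so effective pull throughput can be estimated directly from the log (for kube-scheduler, roughly 18.6 MB in about 4.45 s). A worked sketch of that arithmetic using the figures above:

    # Figures taken from the "Pulled image ..." messages above; sizes in bytes, durations in seconds.
    pulls = {
        "kube-apiserver:v1.28.13":          (34527535, 13.806),
        "kube-controller-manager:v1.28.13": (33399655, 2.037),
        "kube-scheduler:v1.28.13":          (18647741, 4.445),
    }

    for image, (size, secs) in pulls.items():
        mib_per_s = size / secs / (1024 * 1024)
        print(f"{image}: {size / 1e6:.1f} MB in {secs:.2f}s (~{mib_per_s:.1f} MiB/s)")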
Sep 4 17:31:21.201582 containerd[1805]: time="2024-09-04T17:31:21.201518221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:21.203589 containerd[1805]: time="2024-09-04T17:31:21.203515732Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303457" Sep 4 17:31:21.265459 containerd[1805]: time="2024-09-04T17:31:21.265379282Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:21.272860 containerd[1805]: time="2024-09-04T17:31:21.272806824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:21.273695 containerd[1805]: time="2024-09-04T17:31:21.273541128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 5.782472758s" Sep 4 17:31:21.273695 containerd[1805]: time="2024-09-04T17:31:21.273579029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 4 17:31:21.295090 containerd[1805]: time="2024-09-04T17:31:21.295046050Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:31:21.901206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Sep 4 17:31:21.908463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:22.006431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:22.010923 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:31:22.601997 kubelet[2820]: E0904 17:31:22.601933 2820 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:31:22.604590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:31:22.604924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:31:27.769826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582140177.mount: Deactivated successfully. 
Sep 4 17:31:27.957409 containerd[1805]: time="2024-09-04T17:31:27.957342716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:28.006911 containerd[1805]: time="2024-09-04T17:31:28.006809384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Sep 4 17:31:28.010431 containerd[1805]: time="2024-09-04T17:31:28.010354803Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:28.066124 containerd[1805]: time="2024-09-04T17:31:28.065849603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:28.070291 containerd[1805]: time="2024-09-04T17:31:28.070249027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 6.775145877s" Sep 4 17:31:28.070291 containerd[1805]: time="2024-09-04T17:31:28.070288627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:31:28.091877 containerd[1805]: time="2024-09-04T17:31:28.091839944Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:31:29.780779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945870723.mount: Deactivated successfully. Sep 4 17:31:32.651105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Sep 4 17:31:32.657454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:32.817552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:32.828690 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:31:33.288458 kubelet[2894]: E0904 17:31:33.288363 2894 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:31:33.291851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:31:33.292119 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:31:36.507764 containerd[1805]: time="2024-09-04T17:31:36.507710201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:36.511294 containerd[1805]: time="2024-09-04T17:31:36.511202816Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Sep 4 17:31:36.559019 containerd[1805]: time="2024-09-04T17:31:36.558941118Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:36.604545 containerd[1805]: time="2024-09-04T17:31:36.604465610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:36.606634 containerd[1805]: time="2024-09-04T17:31:36.606067517Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 8.514185873s" Sep 4 17:31:36.606634 containerd[1805]: time="2024-09-04T17:31:36.606114117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:31:36.627972 containerd[1805]: time="2024-09-04T17:31:36.627944410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:31:38.418562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287476947.mount: Deactivated successfully. 
Sep 4 17:31:39.354371 containerd[1805]: time="2024-09-04T17:31:39.354298638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:39.358013 containerd[1805]: time="2024-09-04T17:31:39.357939254Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Sep 4 17:31:39.421633 containerd[1805]: time="2024-09-04T17:31:39.421082021Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:39.464161 containerd[1805]: time="2024-09-04T17:31:39.464066102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:39.465505 containerd[1805]: time="2024-09-04T17:31:39.465190807Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.837182997s" Sep 4 17:31:39.465505 containerd[1805]: time="2024-09-04T17:31:39.465255807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 4 17:31:42.400327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:42.407504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:42.434533 systemd[1]: Reloading requested from client PID 2981 ('systemctl') (unit session-9.scope)... Sep 4 17:31:42.434547 systemd[1]: Reloading... Sep 4 17:31:42.515258 zram_generator::config[3018]: No configuration found. Sep 4 17:31:42.640006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:31:42.717016 systemd[1]: Reloading finished in 282 ms. Sep 4 17:31:43.616195 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:31:43.616376 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:31:43.616814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:43.628745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:51.540419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:51.545475 (kubelet)[3095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:31:51.931724 kubelet[3095]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:31:51.931724 kubelet[3095]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 17:31:51.931724 kubelet[3095]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:31:51.932258 kubelet[3095]: I0904 17:31:51.931774 3095 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:31:52.377491 kubelet[3095]: I0904 17:31:52.377379 3095 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:31:52.377491 kubelet[3095]: I0904 17:31:52.377408 3095 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:31:52.378143 kubelet[3095]: I0904 17:31:52.377681 3095 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:31:52.397447 kubelet[3095]: E0904 17:31:52.397418 3095 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.397852 kubelet[3095]: I0904 17:31:52.397709 3095 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:31:52.406997 kubelet[3095]: I0904 17:31:52.406969 3095 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:31:52.407382 kubelet[3095]: I0904 17:31:52.407361 3095 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:31:52.407560 kubelet[3095]: I0904 17:31:52.407539 3095 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:31:52.408216 kubelet[3095]: I0904 17:31:52.408190 3095 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:31:52.408216 kubelet[3095]: I0904 17:31:52.408219 3095 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 
17:31:52.409025 kubelet[3095]: I0904 17:31:52.408997 3095 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:31:52.410390 kubelet[3095]: I0904 17:31:52.410371 3095 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:31:52.410390 kubelet[3095]: I0904 17:31:52.410396 3095 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:31:52.410639 kubelet[3095]: I0904 17:31:52.410425 3095 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:31:52.410639 kubelet[3095]: I0904 17:31:52.410444 3095 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:31:52.414047 kubelet[3095]: W0904 17:31:52.413877 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.414047 kubelet[3095]: E0904 17:31:52.413927 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.415106 kubelet[3095]: W0904 17:31:52.415065 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-eeaffe6a3f&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.415596 kubelet[3095]: E0904 17:31:52.415219 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-eeaffe6a3f&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.415596 kubelet[3095]: I0904 17:31:52.415323 3095 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:31:52.417959 kubelet[3095]: W0904 17:31:52.417932 3095 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
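[Note] At this point the kubelet is running, but every reflector list/watch and the client certificate bootstrap fail with "connect: connection refused" against https://10.200.8.37:6443, because the kube-apiserver static pod has not started yet. A minimal reachability probe for the same endpoint:

    import socket

    def apiserver_reachable(host="10.200.8.37", port=6443, timeout=3):
        # Pure TCP check: mirrors the "connection refused" the kubelet reports before the
        # kube-apiserver static pod is up; it says nothing about TLS or authentication.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print(f"{host}:{port} not reachable: {exc}")
            return False

    apiserver_reachable()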
Sep 4 17:31:52.418522 kubelet[3095]: I0904 17:31:52.418501 3095 server.go:1232] "Started kubelet" Sep 4 17:31:52.418732 kubelet[3095]: I0904 17:31:52.418709 3095 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:31:52.419688 kubelet[3095]: I0904 17:31:52.419632 3095 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:31:52.421487 kubelet[3095]: I0904 17:31:52.420988 3095 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:31:52.421487 kubelet[3095]: I0904 17:31:52.421269 3095 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:31:52.421906 kubelet[3095]: E0904 17:31:52.421459 3095 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.1-a-eeaffe6a3f.17f21ad5b5e6c47f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.1-a-eeaffe6a3f", UID:"ci-3975.2.1-a-eeaffe6a3f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.1-a-eeaffe6a3f"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 31, 52, 418477183, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 31, 52, 418477183, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.1-a-eeaffe6a3f"}': 'Post "https://10.200.8.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.37:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:31:52.421906 kubelet[3095]: E0904 17:31:52.421851 3095 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:31:52.421906 kubelet[3095]: E0904 17:31:52.421877 3095 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:31:52.422986 kubelet[3095]: I0904 17:31:52.422828 3095 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:31:52.424583 kubelet[3095]: I0904 17:31:52.424565 3095 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:31:52.426611 kubelet[3095]: I0904 17:31:52.426595 3095 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:31:52.427058 kubelet[3095]: I0904 17:31:52.427040 3095 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:31:52.428388 kubelet[3095]: W0904 17:31:52.428351 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.428511 kubelet[3095]: E0904 17:31:52.428499 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.428700 kubelet[3095]: E0904 17:31:52.428686 3095 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-eeaffe6a3f?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="200ms" Sep 4 17:31:52.467210 kubelet[3095]: I0904 17:31:52.467193 3095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:31:52.468996 kubelet[3095]: I0904 17:31:52.468964 3095 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:31:52.469175 kubelet[3095]: I0904 17:31:52.469097 3095 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:31:52.469175 kubelet[3095]: I0904 17:31:52.469123 3095 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:31:52.470524 kubelet[3095]: E0904 17:31:52.469872 3095 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:31:52.470524 kubelet[3095]: W0904 17:31:52.470186 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.470524 kubelet[3095]: E0904 17:31:52.470224 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:52.487492 kubelet[3095]: I0904 17:31:52.487476 3095 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:31:52.487589 kubelet[3095]: I0904 17:31:52.487545 3095 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:31:52.487589 kubelet[3095]: I0904 17:31:52.487565 3095 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:31:52.492883 kubelet[3095]: I0904 17:31:52.492848 3095 policy_none.go:49] "None policy: Start" Sep 4 17:31:52.493601 kubelet[3095]: I0904 17:31:52.493585 3095 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:31:52.493678 kubelet[3095]: I0904 17:31:52.493663 3095 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:31:52.501249 kubelet[3095]: I0904 17:31:52.500448 3095 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:31:52.501249 kubelet[3095]: I0904 17:31:52.500680 3095 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:31:52.503686 kubelet[3095]: E0904 17:31:52.503665 3095 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-a-eeaffe6a3f\" not found" Sep 4 17:31:52.527379 kubelet[3095]: I0904 17:31:52.527356 3095 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.527677 kubelet[3095]: E0904 17:31:52.527654 3095 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.570902 kubelet[3095]: I0904 17:31:52.570879 3095 topology_manager.go:215] "Topology Admit Handler" podUID="c1a00d6633523da27fead1b7702c41c8" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.572421 kubelet[3095]: I0904 17:31:52.572396 3095 topology_manager.go:215] "Topology Admit Handler" podUID="aef68cc064a7b97c83b688f2bff9e956" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.573966 kubelet[3095]: I0904 17:31:52.573941 3095 topology_manager.go:215] "Topology Admit Handler" podUID="828eae56d7840abad2c83b0f2e61a23f" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.627696 
kubelet[3095]: I0904 17:31:52.627517 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.627696 kubelet[3095]: I0904 17:31:52.627570 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.627696 kubelet[3095]: I0904 17:31:52.627609 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef68cc064a7b97c83b688f2bff9e956-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"aef68cc064a7b97c83b688f2bff9e956\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.627696 kubelet[3095]: I0904 17:31:52.627648 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/828eae56d7840abad2c83b0f2e61a23f-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"828eae56d7840abad2c83b0f2e61a23f\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.627696 kubelet[3095]: I0904 17:31:52.627684 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/828eae56d7840abad2c83b0f2e61a23f-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"828eae56d7840abad2c83b0f2e61a23f\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.628022 kubelet[3095]: I0904 17:31:52.627721 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/828eae56d7840abad2c83b0f2e61a23f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"828eae56d7840abad2c83b0f2e61a23f\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.628022 kubelet[3095]: I0904 17:31:52.627753 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.628022 kubelet[3095]: I0904 17:31:52.627788 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.628022 kubelet[3095]: I0904 17:31:52.627823 3095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.629811 kubelet[3095]: E0904 17:31:52.629698 3095 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-eeaffe6a3f?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="400ms" Sep 4 17:31:52.730720 kubelet[3095]: I0904 17:31:52.730681 3095 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.731074 kubelet[3095]: E0904 17:31:52.731049 3095 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:52.878063 containerd[1805]: time="2024-09-04T17:31:52.877941266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f,Uid:c1a00d6633523da27fead1b7702c41c8,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:52.880585 containerd[1805]: time="2024-09-04T17:31:52.880539393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-eeaffe6a3f,Uid:aef68cc064a7b97c83b688f2bff9e956,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:52.885809 containerd[1805]: time="2024-09-04T17:31:52.885514244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-eeaffe6a3f,Uid:828eae56d7840abad2c83b0f2e61a23f,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:53.030433 kubelet[3095]: E0904 17:31:53.030405 3095 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-eeaffe6a3f?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="800ms" Sep 4 17:31:53.133739 kubelet[3095]: I0904 17:31:53.133630 3095 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:53.134106 kubelet[3095]: E0904 17:31:53.134067 3095 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:53.313534 kubelet[3095]: W0904 17:31:53.313133 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.313534 kubelet[3095]: E0904 17:31:53.313212 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.486586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407812351.mount: Deactivated successfully. 
Sep 4 17:31:53.509043 kubelet[3095]: W0904 17:31:53.509014 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.509142 kubelet[3095]: E0904 17:31:53.509053 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.513353 containerd[1805]: time="2024-09-04T17:31:53.513313243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:53.516470 containerd[1805]: time="2024-09-04T17:31:53.516424575Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:31:53.518435 containerd[1805]: time="2024-09-04T17:31:53.518403195Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:53.521664 containerd[1805]: time="2024-09-04T17:31:53.521604028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:53.524999 containerd[1805]: time="2024-09-04T17:31:53.524868461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:31:53.528810 containerd[1805]: time="2024-09-04T17:31:53.528770901Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:53.531804 containerd[1805]: time="2024-09-04T17:31:53.531532329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:31:53.536588 containerd[1805]: time="2024-09-04T17:31:53.536505780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:53.537887 containerd[1805]: time="2024-09-04T17:31:53.537554791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.901296ms" Sep 4 17:31:53.539031 containerd[1805]: time="2024-09-04T17:31:53.538997105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.943038ms" Sep 4 17:31:53.542565 containerd[1805]: time="2024-09-04T17:31:53.542533641Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.922596ms" Sep 4 17:31:53.733073 kubelet[3095]: W0904 17:31:53.733012 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-eeaffe6a3f&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.733073 kubelet[3095]: E0904 17:31:53.733076 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-eeaffe6a3f&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.830927 kubelet[3095]: E0904 17:31:53.830819 3095 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-eeaffe6a3f?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="1.6s" Sep 4 17:31:53.884593 kubelet[3095]: W0904 17:31:53.884556 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.884727 kubelet[3095]: E0904 17:31:53.884601 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:53.936323 kubelet[3095]: I0904 17:31:53.936281 3095 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:53.936663 kubelet[3095]: E0904 17:31:53.936636 3095 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:54.428285 kubelet[3095]: E0904 17:31:54.428251 3095 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:54.662872 kubelet[3095]: E0904 17:31:54.662756 3095 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.1-a-eeaffe6a3f.17f21ad5b5e6c47f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", 
Name:"ci-3975.2.1-a-eeaffe6a3f", UID:"ci-3975.2.1-a-eeaffe6a3f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.1-a-eeaffe6a3f"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 31, 52, 418477183, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 31, 52, 418477183, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.1-a-eeaffe6a3f"}': 'Post "https://10.200.8.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.37:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:31:55.292318 kubelet[3095]: W0904 17:31:55.292278 3095 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:55.292318 kubelet[3095]: E0904 17:31:55.292322 3095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Sep 4 17:31:55.361256 containerd[1805]: time="2024-09-04T17:31:55.359284361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:55.361256 containerd[1805]: time="2024-09-04T17:31:55.359334462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:55.361256 containerd[1805]: time="2024-09-04T17:31:55.359352262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:55.361256 containerd[1805]: time="2024-09-04T17:31:55.359365162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:55.372535 containerd[1805]: time="2024-09-04T17:31:55.369368864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:55.372535 containerd[1805]: time="2024-09-04T17:31:55.369438865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:55.372535 containerd[1805]: time="2024-09-04T17:31:55.369466965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:55.372535 containerd[1805]: time="2024-09-04T17:31:55.369487165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:55.373861 containerd[1805]: time="2024-09-04T17:31:55.373758609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:55.374273 containerd[1805]: time="2024-09-04T17:31:55.373834309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:55.374707 containerd[1805]: time="2024-09-04T17:31:55.374640418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:55.374787 containerd[1805]: time="2024-09-04T17:31:55.374731519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:55.425452 systemd[1]: run-containerd-runc-k8s.io-fdf64dd7ef7588c085b35eb08f875151a0da762e0d27bab7b9f28171f12e43e0-runc.g1jAMN.mount: Deactivated successfully. Sep 4 17:31:55.441412 kubelet[3095]: E0904 17:31:55.441389 3095 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-eeaffe6a3f?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="3.2s" Sep 4 17:31:55.513730 containerd[1805]: time="2024-09-04T17:31:55.513497033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-eeaffe6a3f,Uid:aef68cc064a7b97c83b688f2bff9e956,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb24d64eefc1aa4e9fb74a7cc71227d524034ecac70c4502f357692a9ce8872\"" Sep 4 17:31:55.517890 containerd[1805]: time="2024-09-04T17:31:55.517746077Z" level=info msg="CreateContainer within sandbox \"ceb24d64eefc1aa4e9fb74a7cc71227d524034ecac70c4502f357692a9ce8872\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:31:55.520914 containerd[1805]: time="2024-09-04T17:31:55.520771507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f,Uid:c1a00d6633523da27fead1b7702c41c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb101acf7dd58c81930075946803ced69f84d287851c208eacdaf78b6ec67c83\"" Sep 4 17:31:55.521154 containerd[1805]: time="2024-09-04T17:31:55.521103411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-eeaffe6a3f,Uid:828eae56d7840abad2c83b0f2e61a23f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf64dd7ef7588c085b35eb08f875151a0da762e0d27bab7b9f28171f12e43e0\"" Sep 4 17:31:55.524295 containerd[1805]: time="2024-09-04T17:31:55.523785238Z" level=info msg="CreateContainer within sandbox \"eb101acf7dd58c81930075946803ced69f84d287851c208eacdaf78b6ec67c83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:31:55.524556 containerd[1805]: time="2024-09-04T17:31:55.524531146Z" level=info msg="CreateContainer within sandbox \"fdf64dd7ef7588c085b35eb08f875151a0da762e0d27bab7b9f28171f12e43e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:31:55.538871 kubelet[3095]: I0904 17:31:55.538850 3095 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:55.539172 kubelet[3095]: E0904 17:31:55.539155 3095 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:55.568657 containerd[1805]: time="2024-09-04T17:31:55.568570495Z" level=info msg="CreateContainer within sandbox \"ceb24d64eefc1aa4e9fb74a7cc71227d524034ecac70c4502f357692a9ce8872\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61fcd8e5fb955d334c95d41c971fbcaaca4e43f83ef613c5399303e363d1e9c0\"" 
Sep 4 17:31:55.569781 containerd[1805]: time="2024-09-04T17:31:55.569707306Z" level=info msg="StartContainer for \"61fcd8e5fb955d334c95d41c971fbcaaca4e43f83ef613c5399303e363d1e9c0\"" Sep 4 17:31:55.594015 containerd[1805]: time="2024-09-04T17:31:55.593912653Z" level=info msg="CreateContainer within sandbox \"eb101acf7dd58c81930075946803ced69f84d287851c208eacdaf78b6ec67c83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ac9909d29811cc4e05d09dfcf79107a52a14bd28ed5575c7576d87017bfdf72\"" Sep 4 17:31:55.596330 containerd[1805]: time="2024-09-04T17:31:55.596289577Z" level=info msg="StartContainer for \"5ac9909d29811cc4e05d09dfcf79107a52a14bd28ed5575c7576d87017bfdf72\"" Sep 4 17:31:55.601540 containerd[1805]: time="2024-09-04T17:31:55.601504230Z" level=info msg="CreateContainer within sandbox \"fdf64dd7ef7588c085b35eb08f875151a0da762e0d27bab7b9f28171f12e43e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f15a4825e48b4a1bc59a6fb80ee6ddac7adec7cb335d53fbe54daca8c6f585d\"" Sep 4 17:31:55.603311 containerd[1805]: time="2024-09-04T17:31:55.603287149Z" level=info msg="StartContainer for \"2f15a4825e48b4a1bc59a6fb80ee6ddac7adec7cb335d53fbe54daca8c6f585d\"" Sep 4 17:31:55.684577 containerd[1805]: time="2024-09-04T17:31:55.684531477Z" level=info msg="StartContainer for \"61fcd8e5fb955d334c95d41c971fbcaaca4e43f83ef613c5399303e363d1e9c0\" returns successfully" Sep 4 17:31:55.728521 containerd[1805]: time="2024-09-04T17:31:55.728326623Z" level=info msg="StartContainer for \"5ac9909d29811cc4e05d09dfcf79107a52a14bd28ed5575c7576d87017bfdf72\" returns successfully" Sep 4 17:31:55.747316 containerd[1805]: time="2024-09-04T17:31:55.746920313Z" level=info msg="StartContainer for \"2f15a4825e48b4a1bc59a6fb80ee6ddac7adec7cb335d53fbe54daca8c6f585d\" returns successfully" Sep 4 17:31:58.416056 kubelet[3095]: I0904 17:31:58.416011 3095 apiserver.go:52] "Watching apiserver" Sep 4 17:31:58.427277 kubelet[3095]: I0904 17:31:58.427243 3095 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:31:58.580356 kubelet[3095]: E0904 17:31:58.580307 3095 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-eeaffe6a3f" not found Sep 4 17:31:58.645386 kubelet[3095]: E0904 17:31:58.645351 3095 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.1-a-eeaffe6a3f\" not found" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:58.741841 kubelet[3095]: I0904 17:31:58.741739 3095 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:58.747651 kubelet[3095]: I0904 17:31:58.747625 3095 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:31:59.622948 kubelet[3095]: W0904 17:31:59.622901 3095 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:32:00.468950 kubelet[3095]: W0904 17:32:00.468431 3095 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:32:00.552071 systemd[1]: Reloading requested from client PID 3370 ('systemctl') (unit session-9.scope)... Sep 4 17:32:00.552086 systemd[1]: Reloading... 
Sep 4 17:32:00.644257 zram_generator::config[3410]: No configuration found. Sep 4 17:32:00.765522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:32:00.851712 systemd[1]: Reloading finished in 299 ms. Sep 4 17:32:00.885434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:32:00.894540 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:32:00.895075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:32:00.903486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:32:05.546447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:32:05.556141 (kubelet)[3484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:32:05.600299 kubelet[3484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:32:05.600299 kubelet[3484]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:32:05.600299 kubelet[3484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:32:05.600299 kubelet[3484]: I0904 17:32:05.599454 3484 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:32:05.604362 kubelet[3484]: I0904 17:32:05.604336 3484 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:32:05.604362 kubelet[3484]: I0904 17:32:05.604357 3484 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:32:05.604554 kubelet[3484]: I0904 17:32:05.604536 3484 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:32:05.605813 kubelet[3484]: I0904 17:32:05.605791 3484 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:32:05.606796 kubelet[3484]: I0904 17:32:05.606694 3484 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:32:05.614214 kubelet[3484]: I0904 17:32:05.613225 3484 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:32:05.614214 kubelet[3484]: I0904 17:32:05.613689 3484 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:32:05.614214 kubelet[3484]: I0904 17:32:05.613871 3484 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:32:05.614214 kubelet[3484]: I0904 17:32:05.613899 3484 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:32:05.614214 kubelet[3484]: I0904 17:32:05.613908 3484 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:32:05.614214 kubelet[3484]: I0904 17:32:05.613947 3484 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:32:05.614555 kubelet[3484]: I0904 17:32:05.614030 3484 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:32:05.614555 kubelet[3484]: I0904 17:32:05.614043 3484 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:32:05.614555 kubelet[3484]: I0904 17:32:05.614113 3484 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:32:05.614555 kubelet[3484]: I0904 17:32:05.614133 3484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:32:05.618530 kubelet[3484]: I0904 17:32:05.618510 3484 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:32:05.622868 kubelet[3484]: I0904 17:32:05.622849 3484 server.go:1232] "Started kubelet" Sep 4 17:32:05.627794 kubelet[3484]: I0904 17:32:05.627776 3484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:32:05.630253 kubelet[3484]: I0904 17:32:05.629845 3484 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:32:05.631545 kubelet[3484]: E0904 17:32:05.631530 3484 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:32:05.633332 kubelet[3484]: E0904 17:32:05.633315 3484 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:32:05.634512 kubelet[3484]: I0904 17:32:05.632343 3484 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:32:05.636987 kubelet[3484]: I0904 17:32:05.632384 3484 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:32:05.637304 kubelet[3484]: I0904 17:32:05.637289 3484 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:32:05.637434 kubelet[3484]: I0904 17:32:05.637331 3484 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:32:05.638264 kubelet[3484]: I0904 17:32:05.637377 3484 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:32:05.638264 kubelet[3484]: I0904 17:32:05.637938 3484 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:32:05.655503 kubelet[3484]: I0904 17:32:05.655484 3484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:32:05.656845 kubelet[3484]: I0904 17:32:05.656827 3484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:32:05.656954 kubelet[3484]: I0904 17:32:05.656943 3484 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:32:05.657036 kubelet[3484]: I0904 17:32:05.657026 3484 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:32:05.657154 kubelet[3484]: E0904 17:32:05.657143 3484 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:32:05.742547 kubelet[3484]: I0904 17:32:05.742517 3484 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:32:05.742547 kubelet[3484]: I0904 17:32:05.742544 3484 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:32:05.742725 kubelet[3484]: I0904 17:32:05.742564 3484 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:32:05.742725 kubelet[3484]: I0904 17:32:05.742719 3484 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:32:05.742828 kubelet[3484]: I0904 17:32:05.742747 3484 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:32:05.742828 kubelet[3484]: I0904 17:32:05.742757 3484 policy_none.go:49] "None policy: Start" Sep 4 17:32:05.743661 kubelet[3484]: I0904 17:32:05.743425 3484 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:05.745540 kubelet[3484]: I0904 17:32:05.743858 3484 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:32:05.745540 kubelet[3484]: I0904 17:32:05.743884 3484 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:32:05.745935 kubelet[3484]: I0904 17:32:05.745916 3484 state_mem.go:75] "Updated machine memory state" Sep 4 17:32:05.747064 kubelet[3484]: I0904 17:32:05.747045 3484 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:32:05.747901 kubelet[3484]: I0904 17:32:05.747399 3484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:32:05.755721 kubelet[3484]: I0904 17:32:05.755451 3484 kubelet_node_status.go:108] "Node was previously registered" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:05.755721 kubelet[3484]: I0904 17:32:05.755531 3484 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:05.757515 
kubelet[3484]: I0904 17:32:05.757415 3484 topology_manager.go:215] "Topology Admit Handler" podUID="828eae56d7840abad2c83b0f2e61a23f" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:05.759789 kubelet[3484]: I0904 17:32:05.759491 3484 topology_manager.go:215] "Topology Admit Handler" podUID="c1a00d6633523da27fead1b7702c41c8" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:05.759789 kubelet[3484]: I0904 17:32:05.759555 3484 topology_manager.go:215] "Topology Admit Handler" podUID="aef68cc064a7b97c83b688f2bff9e956" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:05.773805 kubelet[3484]: W0904 17:32:05.773591 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:32:10.257851 kubelet[3484]: W0904 17:32:05.775941 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:32:10.257851 kubelet[3484]: E0904 17:32:05.776035 3484 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.257851 kubelet[3484]: W0904 17:32:05.776267 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:32:10.257851 kubelet[3484]: E0904 17:32:05.776342 3484 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" already exists" pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.257851 kubelet[3484]: I0904 17:32:05.839559 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/828eae56d7840abad2c83b0f2e61a23f-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"828eae56d7840abad2c83b0f2e61a23f\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.257851 kubelet[3484]: I0904 17:32:05.839683 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.257851 kubelet[3484]: I0904 17:32:05.839724 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef68cc064a7b97c83b688f2bff9e956-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"aef68cc064a7b97c83b688f2bff9e956\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258579 kubelet[3484]: I0904 17:32:05.839769 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " 
pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258579 kubelet[3484]: I0904 17:32:05.839821 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/828eae56d7840abad2c83b0f2e61a23f-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"828eae56d7840abad2c83b0f2e61a23f\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258579 kubelet[3484]: I0904 17:32:05.839878 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/828eae56d7840abad2c83b0f2e61a23f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"828eae56d7840abad2c83b0f2e61a23f\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258579 kubelet[3484]: I0904 17:32:05.839989 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258579 kubelet[3484]: I0904 17:32:05.840016 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258795 kubelet[3484]: I0904 17:32:05.840054 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1a00d6633523da27fead1b7702c41c8-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f\" (UID: \"c1a00d6633523da27fead1b7702c41c8\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:10.258795 kubelet[3484]: I0904 17:32:06.618766 3484 apiserver.go:52] "Watching apiserver" Sep 4 17:32:10.258795 kubelet[3484]: I0904 17:32:06.638310 3484 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:32:10.258795 kubelet[3484]: I0904 17:32:06.699529 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-a-eeaffe6a3f" podStartSLOduration=7.699487567 podCreationTimestamp="2024-09-04 17:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:06.699208966 +0000 UTC m=+1.138531222" watchObservedRunningTime="2024-09-04 17:32:06.699487567 +0000 UTC m=+1.138809823" Sep 4 17:32:10.258795 kubelet[3484]: I0904 17:32:06.699632 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.1-a-eeaffe6a3f" podStartSLOduration=1.699609668 podCreationTimestamp="2024-09-04 17:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:06.691823021 +0000 UTC m=+1.131145277" watchObservedRunningTime="2024-09-04 17:32:06.699609668 +0000 UTC m=+1.138931824" Sep 4 
17:32:10.259014 kubelet[3484]: I0904 17:32:06.714503 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-a-eeaffe6a3f" podStartSLOduration=6.714465391 podCreationTimestamp="2024-09-04 17:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:06.707146413 +0000 UTC m=+1.146468569" watchObservedRunningTime="2024-09-04 17:32:06.714465391 +0000 UTC m=+1.153787647" Sep 4 17:32:14.300947 sudo[2523]: pam_unix(sudo:session): session closed for user root Sep 4 17:32:14.412001 sshd[2519]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:14.418771 systemd[1]: sshd@6-10.200.8.37:22-10.200.16.10:35136.service: Deactivated successfully. Sep 4 17:32:14.424131 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:32:14.428550 systemd-logind[1777]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:32:14.430912 systemd-logind[1777]: Removed session 9. Sep 4 17:32:14.724361 kubelet[3484]: I0904 17:32:14.724327 3484 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:32:14.725196 kubelet[3484]: I0904 17:32:14.725019 3484 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:32:14.725293 containerd[1805]: time="2024-09-04T17:32:14.724786881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:32:15.648678 kubelet[3484]: I0904 17:32:15.645978 3484 topology_manager.go:215] "Topology Admit Handler" podUID="597fed8d-6b58-4497-b09d-fa607ca56254" podNamespace="kube-system" podName="kube-proxy-47nj9" Sep 4 17:32:15.703900 kubelet[3484]: I0904 17:32:15.703715 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/597fed8d-6b58-4497-b09d-fa607ca56254-lib-modules\") pod \"kube-proxy-47nj9\" (UID: \"597fed8d-6b58-4497-b09d-fa607ca56254\") " pod="kube-system/kube-proxy-47nj9" Sep 4 17:32:15.704606 kubelet[3484]: I0904 17:32:15.704564 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrwq\" (UniqueName: \"kubernetes.io/projected/597fed8d-6b58-4497-b09d-fa607ca56254-kube-api-access-dzrwq\") pod \"kube-proxy-47nj9\" (UID: \"597fed8d-6b58-4497-b09d-fa607ca56254\") " pod="kube-system/kube-proxy-47nj9" Sep 4 17:32:15.706156 kubelet[3484]: I0904 17:32:15.705886 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/597fed8d-6b58-4497-b09d-fa607ca56254-kube-proxy\") pod \"kube-proxy-47nj9\" (UID: \"597fed8d-6b58-4497-b09d-fa607ca56254\") " pod="kube-system/kube-proxy-47nj9" Sep 4 17:32:15.706332 kubelet[3484]: I0904 17:32:15.706314 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/597fed8d-6b58-4497-b09d-fa607ca56254-xtables-lock\") pod \"kube-proxy-47nj9\" (UID: \"597fed8d-6b58-4497-b09d-fa607ca56254\") " pod="kube-system/kube-proxy-47nj9" Sep 4 17:32:15.720993 kubelet[3484]: I0904 17:32:15.720960 3484 topology_manager.go:215] "Topology Admit Handler" podUID="33f558d7-1718-4aa4-8f8d-fb6aa6af42a0" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-66ksx" Sep 4 
17:32:15.807162 kubelet[3484]: I0904 17:32:15.807123 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlzx\" (UniqueName: \"kubernetes.io/projected/33f558d7-1718-4aa4-8f8d-fb6aa6af42a0-kube-api-access-8nlzx\") pod \"tigera-operator-5d56685c77-66ksx\" (UID: \"33f558d7-1718-4aa4-8f8d-fb6aa6af42a0\") " pod="tigera-operator/tigera-operator-5d56685c77-66ksx" Sep 4 17:32:15.807162 kubelet[3484]: I0904 17:32:15.807169 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33f558d7-1718-4aa4-8f8d-fb6aa6af42a0-var-lib-calico\") pod \"tigera-operator-5d56685c77-66ksx\" (UID: \"33f558d7-1718-4aa4-8f8d-fb6aa6af42a0\") " pod="tigera-operator/tigera-operator-5d56685c77-66ksx" Sep 4 17:32:15.952916 containerd[1805]: time="2024-09-04T17:32:15.952638072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47nj9,Uid:597fed8d-6b58-4497-b09d-fa607ca56254,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:15.996226 containerd[1805]: time="2024-09-04T17:32:15.996082481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:15.996226 containerd[1805]: time="2024-09-04T17:32:15.996136981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:15.996590 containerd[1805]: time="2024-09-04T17:32:15.996288784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:15.996590 containerd[1805]: time="2024-09-04T17:32:15.996348384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:16.032427 containerd[1805]: time="2024-09-04T17:32:16.032130185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-66ksx,Uid:33f558d7-1718-4aa4-8f8d-fb6aa6af42a0,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:32:16.040693 containerd[1805]: time="2024-09-04T17:32:16.040608804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47nj9,Uid:597fed8d-6b58-4497-b09d-fa607ca56254,Namespace:kube-system,Attempt:0,} returns sandbox id \"571603b28cfaf8358e5a198f5dbfffc9832b7db7f596355ecd34c693d0f3c123\"" Sep 4 17:32:16.043779 containerd[1805]: time="2024-09-04T17:32:16.043656247Z" level=info msg="CreateContainer within sandbox \"571603b28cfaf8358e5a198f5dbfffc9832b7db7f596355ecd34c693d0f3c123\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:32:16.090591 containerd[1805]: time="2024-09-04T17:32:16.090494303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:16.090884 containerd[1805]: time="2024-09-04T17:32:16.090599204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:16.090884 containerd[1805]: time="2024-09-04T17:32:16.090640105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:16.090884 containerd[1805]: time="2024-09-04T17:32:16.090758706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:16.114700 containerd[1805]: time="2024-09-04T17:32:16.114415838Z" level=info msg="CreateContainer within sandbox \"571603b28cfaf8358e5a198f5dbfffc9832b7db7f596355ecd34c693d0f3c123\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"55e3b20a32c2f23e8f31bb46ddaf5959df39d8782b73cd43038a86fc97377268\"" Sep 4 17:32:16.115809 containerd[1805]: time="2024-09-04T17:32:16.115330150Z" level=info msg="StartContainer for \"55e3b20a32c2f23e8f31bb46ddaf5959df39d8782b73cd43038a86fc97377268\"" Sep 4 17:32:16.161541 containerd[1805]: time="2024-09-04T17:32:16.161501597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-66ksx,Uid:33f558d7-1718-4aa4-8f8d-fb6aa6af42a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"18272d71259f17065da022fd0cc7f677ea7e231efdfa0cdc00557bf4ed33fda2\"" Sep 4 17:32:16.164379 containerd[1805]: time="2024-09-04T17:32:16.164086133Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:32:16.186333 containerd[1805]: time="2024-09-04T17:32:16.186204143Z" level=info msg="StartContainer for \"55e3b20a32c2f23e8f31bb46ddaf5959df39d8782b73cd43038a86fc97377268\" returns successfully" Sep 4 17:32:16.708757 kubelet[3484]: I0904 17:32:16.708055 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-47nj9" podStartSLOduration=1.7080146489999999 podCreationTimestamp="2024-09-04 17:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:16.707843947 +0000 UTC m=+11.147166103" watchObservedRunningTime="2024-09-04 17:32:16.708014649 +0000 UTC m=+11.147336805" Sep 4 17:32:17.883869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886005667.mount: Deactivated successfully. 
Sep 4 17:32:18.441443 containerd[1805]: time="2024-09-04T17:32:18.441393219Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:18.443732 containerd[1805]: time="2024-09-04T17:32:18.443674851Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521" Sep 4 17:32:18.447881 containerd[1805]: time="2024-09-04T17:32:18.447815509Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:18.452485 containerd[1805]: time="2024-09-04T17:32:18.452431174Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:18.453707 containerd[1805]: time="2024-09-04T17:32:18.453127283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.28899845s" Sep 4 17:32:18.453707 containerd[1805]: time="2024-09-04T17:32:18.453168984Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:32:18.455206 containerd[1805]: time="2024-09-04T17:32:18.455169212Z" level=info msg="CreateContainer within sandbox \"18272d71259f17065da022fd0cc7f677ea7e231efdfa0cdc00557bf4ed33fda2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:32:18.492120 containerd[1805]: time="2024-09-04T17:32:18.492083729Z" level=info msg="CreateContainer within sandbox \"18272d71259f17065da022fd0cc7f677ea7e231efdfa0cdc00557bf4ed33fda2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2bf0a335318979556267113b861d048a40d5158e66a2fbf2b9818cd36ee8382b\"" Sep 4 17:32:18.493038 containerd[1805]: time="2024-09-04T17:32:18.492547635Z" level=info msg="StartContainer for \"2bf0a335318979556267113b861d048a40d5158e66a2fbf2b9818cd36ee8382b\"" Sep 4 17:32:18.541521 containerd[1805]: time="2024-09-04T17:32:18.541477420Z" level=info msg="StartContainer for \"2bf0a335318979556267113b861d048a40d5158e66a2fbf2b9818cd36ee8382b\" returns successfully" Sep 4 17:32:21.737441 kubelet[3484]: I0904 17:32:21.737375 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-66ksx" podStartSLOduration=4.446593314 podCreationTimestamp="2024-09-04 17:32:15 +0000 UTC" firstStartedPulling="2024-09-04 17:32:16.162798615 +0000 UTC m=+10.602120771" lastFinishedPulling="2024-09-04 17:32:18.453502089 +0000 UTC m=+12.892824345" observedRunningTime="2024-09-04 17:32:18.713060523 +0000 UTC m=+13.152382779" watchObservedRunningTime="2024-09-04 17:32:21.737296888 +0000 UTC m=+16.176619044" Sep 4 17:32:21.741087 kubelet[3484]: I0904 17:32:21.738570 3484 topology_manager.go:215] "Topology Admit Handler" podUID="f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a" podNamespace="calico-system" podName="calico-typha-78f4496fd-2ftfx" Sep 4 17:32:21.812896 kubelet[3484]: I0904 17:32:21.812860 3484 topology_manager.go:215] "Topology Admit Handler" 
podUID="d3b8db00-1f43-43b6-abeb-9e808a23870f" podNamespace="calico-system" podName="calico-node-b9jzj" Sep 4 17:32:21.847408 kubelet[3484]: I0904 17:32:21.847320 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-xtables-lock\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847646 kubelet[3484]: I0904 17:32:21.847566 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d3b8db00-1f43-43b6-abeb-9e808a23870f-node-certs\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847646 kubelet[3484]: I0904 17:32:21.847626 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ntw\" (UniqueName: \"kubernetes.io/projected/d3b8db00-1f43-43b6-abeb-9e808a23870f-kube-api-access-d8ntw\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847740 kubelet[3484]: I0904 17:32:21.847657 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-lib-modules\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847740 kubelet[3484]: I0904 17:32:21.847724 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-policysync\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847832 kubelet[3484]: I0904 17:32:21.847756 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3b8db00-1f43-43b6-abeb-9e808a23870f-tigera-ca-bundle\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847832 kubelet[3484]: I0904 17:32:21.847804 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a-tigera-ca-bundle\") pod \"calico-typha-78f4496fd-2ftfx\" (UID: \"f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a\") " pod="calico-system/calico-typha-78f4496fd-2ftfx" Sep 4 17:32:21.847912 kubelet[3484]: I0904 17:32:21.847866 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-flexvol-driver-host\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.847912 kubelet[3484]: I0904 17:32:21.847896 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnc8j\" (UniqueName: \"kubernetes.io/projected/f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a-kube-api-access-wnc8j\") pod \"calico-typha-78f4496fd-2ftfx\" (UID: 
\"f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a\") " pod="calico-system/calico-typha-78f4496fd-2ftfx" Sep 4 17:32:21.847992 kubelet[3484]: I0904 17:32:21.847950 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-var-run-calico\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.848042 kubelet[3484]: I0904 17:32:21.848006 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-cni-net-dir\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.848081 kubelet[3484]: I0904 17:32:21.848035 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a-typha-certs\") pod \"calico-typha-78f4496fd-2ftfx\" (UID: \"f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a\") " pod="calico-system/calico-typha-78f4496fd-2ftfx" Sep 4 17:32:21.848124 kubelet[3484]: I0904 17:32:21.848111 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-var-lib-calico\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.848165 kubelet[3484]: I0904 17:32:21.848141 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-cni-bin-dir\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.848275 kubelet[3484]: I0904 17:32:21.848253 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d3b8db00-1f43-43b6-abeb-9e808a23870f-cni-log-dir\") pod \"calico-node-b9jzj\" (UID: \"d3b8db00-1f43-43b6-abeb-9e808a23870f\") " pod="calico-system/calico-node-b9jzj" Sep 4 17:32:21.957753 kubelet[3484]: E0904 17:32:21.957596 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.957753 kubelet[3484]: W0904 17:32:21.957617 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.958426 kubelet[3484]: E0904 17:32:21.958223 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:21.958581 kubelet[3484]: E0904 17:32:21.958404 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.958581 kubelet[3484]: W0904 17:32:21.958557 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.958952 kubelet[3484]: E0904 17:32:21.958848 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.959506 kubelet[3484]: E0904 17:32:21.959344 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.959506 kubelet[3484]: W0904 17:32:21.959358 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.960584 kubelet[3484]: E0904 17:32:21.960497 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.966210 kubelet[3484]: E0904 17:32:21.966194 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.966500 kubelet[3484]: W0904 17:32:21.966401 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.968057 kubelet[3484]: E0904 17:32:21.967725 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.973265 kubelet[3484]: I0904 17:32:21.973144 3484 topology_manager.go:215] "Topology Admit Handler" podUID="0887fe36-7732-4bbf-b901-deca836854e8" podNamespace="calico-system" podName="csi-node-driver-d9dpf" Sep 4 17:32:21.975044 kubelet[3484]: E0904 17:32:21.973671 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:21.975044 kubelet[3484]: E0904 17:32:21.973976 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.975044 kubelet[3484]: W0904 17:32:21.973987 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.975044 kubelet[3484]: E0904 17:32:21.974004 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:21.975419 kubelet[3484]: E0904 17:32:21.975405 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.977011 kubelet[3484]: W0904 17:32:21.976947 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.977488 kubelet[3484]: E0904 17:32:21.977424 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.989383 kubelet[3484]: E0904 17:32:21.989312 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.989615 kubelet[3484]: W0904 17:32:21.989474 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.989615 kubelet[3484]: E0904 17:32:21.989501 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.993252 kubelet[3484]: E0904 17:32:21.992892 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.993252 kubelet[3484]: W0904 17:32:21.992908 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.997347 kubelet[3484]: E0904 17:32:21.997318 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.997569 kubelet[3484]: E0904 17:32:21.997482 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.997864 kubelet[3484]: W0904 17:32:21.997792 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.998333 kubelet[3484]: E0904 17:32:21.998269 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:21.998658 kubelet[3484]: E0904 17:32:21.998644 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:21.999465 kubelet[3484]: W0904 17:32:21.999394 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:21.999912 kubelet[3484]: E0904 17:32:21.999897 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.000269 kubelet[3484]: E0904 17:32:22.000186 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.000269 kubelet[3484]: W0904 17:32:22.000216 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.000547 kubelet[3484]: E0904 17:32:22.000511 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.000797 kubelet[3484]: E0904 17:32:22.000785 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.000962 kubelet[3484]: W0904 17:32:22.000884 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.001102 kubelet[3484]: E0904 17:32:22.001062 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.001490 kubelet[3484]: E0904 17:32:22.001433 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.001490 kubelet[3484]: W0904 17:32:22.001449 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.001970 kubelet[3484]: E0904 17:32:22.001876 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.003003 kubelet[3484]: E0904 17:32:22.002876 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.003275 kubelet[3484]: W0904 17:32:22.002890 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.005367 kubelet[3484]: E0904 17:32:22.004609 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.006427 kubelet[3484]: E0904 17:32:22.005957 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.006427 kubelet[3484]: W0904 17:32:22.005972 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.006427 kubelet[3484]: E0904 17:32:22.006183 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.006427 kubelet[3484]: W0904 17:32:22.006195 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.006427 kubelet[3484]: E0904 17:32:22.006391 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.006427 kubelet[3484]: W0904 17:32:22.006402 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.007869 kubelet[3484]: E0904 17:32:22.007376 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.007869 kubelet[3484]: E0904 17:32:22.007391 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.007869 kubelet[3484]: E0904 17:32:22.007377 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.007869 kubelet[3484]: W0904 17:32:22.007422 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.007869 kubelet[3484]: E0904 17:32:22.007432 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.007869 kubelet[3484]: E0904 17:32:22.007450 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.009597 kubelet[3484]: E0904 17:32:22.009307 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.009597 kubelet[3484]: W0904 17:32:22.009323 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.009597 kubelet[3484]: E0904 17:32:22.009565 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.009917 kubelet[3484]: E0904 17:32:22.009801 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.009917 kubelet[3484]: W0904 17:32:22.009814 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.010431 kubelet[3484]: E0904 17:32:22.010312 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.011384 kubelet[3484]: E0904 17:32:22.011300 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.011384 kubelet[3484]: W0904 17:32:22.011317 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.011765 kubelet[3484]: E0904 17:32:22.011708 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.011765 kubelet[3484]: W0904 17:32:22.011722 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.012205 kubelet[3484]: E0904 17:32:22.012005 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.013162 kubelet[3484]: E0904 17:32:22.013148 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.013331 kubelet[3484]: W0904 17:32:22.013280 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.014320 kubelet[3484]: E0904 17:32:22.014109 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.014660 kubelet[3484]: W0904 17:32:22.014432 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.014660 kubelet[3484]: E0904 17:32:22.014461 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.014873 kubelet[3484]: E0904 17:32:22.014797 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.015483 kubelet[3484]: E0904 17:32:22.015362 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.020931 kubelet[3484]: W0904 17:32:22.020727 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.020931 kubelet[3484]: E0904 17:32:22.020761 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.021251 kubelet[3484]: E0904 17:32:22.021150 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.021251 kubelet[3484]: W0904 17:32:22.021190 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.021251 kubelet[3484]: E0904 17:32:22.021210 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.021595 kubelet[3484]: E0904 17:32:22.015595 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.030207 kubelet[3484]: E0904 17:32:22.029740 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.031798 kubelet[3484]: W0904 17:32:22.029754 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.031798 kubelet[3484]: E0904 17:32:22.031753 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.035786 kubelet[3484]: E0904 17:32:22.035720 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.035786 kubelet[3484]: W0904 17:32:22.035745 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.035786 kubelet[3484]: E0904 17:32:22.035762 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.043671 kubelet[3484]: E0904 17:32:22.042110 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.043671 kubelet[3484]: W0904 17:32:22.042125 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.043671 kubelet[3484]: E0904 17:32:22.042143 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.043671 kubelet[3484]: E0904 17:32:22.042846 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.043671 kubelet[3484]: W0904 17:32:22.042963 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.043671 kubelet[3484]: E0904 17:32:22.042984 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.043671 kubelet[3484]: E0904 17:32:22.043527 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.043671 kubelet[3484]: W0904 17:32:22.043550 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.047337 kubelet[3484]: E0904 17:32:22.043567 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.047337 kubelet[3484]: E0904 17:32:22.045552 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.047337 kubelet[3484]: W0904 17:32:22.045565 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.047337 kubelet[3484]: E0904 17:32:22.047278 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.049143 kubelet[3484]: E0904 17:32:22.048676 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.049143 kubelet[3484]: W0904 17:32:22.048708 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.049143 kubelet[3484]: E0904 17:32:22.048727 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.049143 kubelet[3484]: E0904 17:32:22.048997 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.049143 kubelet[3484]: W0904 17:32:22.049008 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.049143 kubelet[3484]: E0904 17:32:22.049043 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.050728 kubelet[3484]: E0904 17:32:22.050345 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.050728 kubelet[3484]: W0904 17:32:22.050365 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.050728 kubelet[3484]: E0904 17:32:22.050408 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.051722 kubelet[3484]: E0904 17:32:22.051604 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.051722 kubelet[3484]: W0904 17:32:22.051617 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.051722 kubelet[3484]: E0904 17:32:22.051654 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.052180 kubelet[3484]: E0904 17:32:22.052073 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.052180 kubelet[3484]: W0904 17:32:22.052086 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.052180 kubelet[3484]: E0904 17:32:22.052104 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.053623 kubelet[3484]: E0904 17:32:22.053604 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.053623 kubelet[3484]: W0904 17:32:22.053622 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.053774 kubelet[3484]: E0904 17:32:22.053641 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.055391 kubelet[3484]: E0904 17:32:22.055277 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.055391 kubelet[3484]: W0904 17:32:22.055295 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.055391 kubelet[3484]: E0904 17:32:22.055312 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.055959 kubelet[3484]: E0904 17:32:22.055941 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.055959 kubelet[3484]: W0904 17:32:22.055958 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.056083 kubelet[3484]: E0904 17:32:22.055975 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.056904 kubelet[3484]: E0904 17:32:22.056779 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.056904 kubelet[3484]: W0904 17:32:22.056793 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.056904 kubelet[3484]: E0904 17:32:22.056830 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.057448 kubelet[3484]: E0904 17:32:22.057434 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.057667 kubelet[3484]: W0904 17:32:22.057651 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.057893 kubelet[3484]: E0904 17:32:22.057732 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.058463 kubelet[3484]: E0904 17:32:22.058449 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.058697 kubelet[3484]: W0904 17:32:22.058544 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.058697 kubelet[3484]: E0904 17:32:22.058565 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.059085 kubelet[3484]: E0904 17:32:22.058943 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.059085 kubelet[3484]: W0904 17:32:22.058956 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.059085 kubelet[3484]: E0904 17:32:22.058973 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.061020 kubelet[3484]: E0904 17:32:22.060696 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.061020 kubelet[3484]: W0904 17:32:22.060713 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.061020 kubelet[3484]: E0904 17:32:22.060927 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.061214 containerd[1805]: time="2024-09-04T17:32:22.060756677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f4496fd-2ftfx,Uid:f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:22.062525 kubelet[3484]: E0904 17:32:22.062185 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.062525 kubelet[3484]: W0904 17:32:22.062200 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.062525 kubelet[3484]: E0904 17:32:22.062217 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.063093 kubelet[3484]: E0904 17:32:22.062649 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.063093 kubelet[3484]: W0904 17:32:22.062660 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.063093 kubelet[3484]: E0904 17:32:22.062694 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.063093 kubelet[3484]: E0904 17:32:22.062978 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.063093 kubelet[3484]: W0904 17:32:22.062990 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.063093 kubelet[3484]: E0904 17:32:22.063029 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.064283 kubelet[3484]: E0904 17:32:22.063695 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.064283 kubelet[3484]: W0904 17:32:22.063709 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.064283 kubelet[3484]: E0904 17:32:22.063726 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.064283 kubelet[3484]: I0904 17:32:22.063778 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0887fe36-7732-4bbf-b901-deca836854e8-kubelet-dir\") pod \"csi-node-driver-d9dpf\" (UID: \"0887fe36-7732-4bbf-b901-deca836854e8\") " pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:22.064283 kubelet[3484]: E0904 17:32:22.064168 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.064283 kubelet[3484]: W0904 17:32:22.064180 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.064283 kubelet[3484]: E0904 17:32:22.064201 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.064618 kubelet[3484]: I0904 17:32:22.064346 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0887fe36-7732-4bbf-b901-deca836854e8-socket-dir\") pod \"csi-node-driver-d9dpf\" (UID: \"0887fe36-7732-4bbf-b901-deca836854e8\") " pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:22.064618 kubelet[3484]: E0904 17:32:22.064564 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.064618 kubelet[3484]: W0904 17:32:22.064574 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.064618 kubelet[3484]: E0904 17:32:22.064610 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.066394 kubelet[3484]: E0904 17:32:22.064946 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.066394 kubelet[3484]: W0904 17:32:22.064957 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.066394 kubelet[3484]: E0904 17:32:22.065016 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.066394 kubelet[3484]: E0904 17:32:22.065557 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.066394 kubelet[3484]: W0904 17:32:22.065569 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.066394 kubelet[3484]: E0904 17:32:22.065587 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.066394 kubelet[3484]: I0904 17:32:22.065639 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmr8\" (UniqueName: \"kubernetes.io/projected/0887fe36-7732-4bbf-b901-deca836854e8-kube-api-access-mrmr8\") pod \"csi-node-driver-d9dpf\" (UID: \"0887fe36-7732-4bbf-b901-deca836854e8\") " pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:22.066394 kubelet[3484]: E0904 17:32:22.065899 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.066394 kubelet[3484]: W0904 17:32:22.065911 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.066759 kubelet[3484]: E0904 17:32:22.065947 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.066759 kubelet[3484]: I0904 17:32:22.065974 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0887fe36-7732-4bbf-b901-deca836854e8-registration-dir\") pod \"csi-node-driver-d9dpf\" (UID: \"0887fe36-7732-4bbf-b901-deca836854e8\") " pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:22.066759 kubelet[3484]: E0904 17:32:22.066214 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.066759 kubelet[3484]: W0904 17:32:22.066226 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.066759 kubelet[3484]: E0904 17:32:22.066277 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.066759 kubelet[3484]: I0904 17:32:22.066304 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0887fe36-7732-4bbf-b901-deca836854e8-varrun\") pod \"csi-node-driver-d9dpf\" (UID: \"0887fe36-7732-4bbf-b901-deca836854e8\") " pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:22.066759 kubelet[3484]: E0904 17:32:22.066618 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.066759 kubelet[3484]: W0904 17:32:22.066652 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.066759 kubelet[3484]: E0904 17:32:22.066670 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.067138 kubelet[3484]: E0904 17:32:22.066904 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.067138 kubelet[3484]: W0904 17:32:22.066914 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.067138 kubelet[3484]: E0904 17:32:22.066933 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.067300 kubelet[3484]: E0904 17:32:22.067209 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.067300 kubelet[3484]: W0904 17:32:22.067218 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.067300 kubelet[3484]: E0904 17:32:22.067260 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.067465 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.069579 kubelet[3484]: W0904 17:32:22.067477 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.067515 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.067763 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.069579 kubelet[3484]: W0904 17:32:22.067775 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.067794 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.068855 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.069579 kubelet[3484]: W0904 17:32:22.068868 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.068886 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.069579 kubelet[3484]: E0904 17:32:22.069178 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.070033 kubelet[3484]: W0904 17:32:22.069189 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.070033 kubelet[3484]: E0904 17:32:22.069205 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.070033 kubelet[3484]: E0904 17:32:22.069466 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.070033 kubelet[3484]: W0904 17:32:22.069478 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.070033 kubelet[3484]: E0904 17:32:22.069532 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.126674 containerd[1805]: time="2024-09-04T17:32:22.126127722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b9jzj,Uid:d3b8db00-1f43-43b6-abeb-9e808a23870f,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:22.135074 containerd[1805]: time="2024-09-04T17:32:22.134950009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:22.135211 containerd[1805]: time="2024-09-04T17:32:22.135087510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:22.135211 containerd[1805]: time="2024-09-04T17:32:22.135127211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:22.135211 containerd[1805]: time="2024-09-04T17:32:22.135172611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:22.167609 kubelet[3484]: E0904 17:32:22.167551 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.167609 kubelet[3484]: W0904 17:32:22.167573 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.167609 kubelet[3484]: E0904 17:32:22.167600 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.168120 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.169750 kubelet[3484]: W0904 17:32:22.168135 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.168205 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.168488 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.169750 kubelet[3484]: W0904 17:32:22.168499 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.168533 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.168780 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.169750 kubelet[3484]: W0904 17:32:22.168792 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.168934 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.169750 kubelet[3484]: E0904 17:32:22.169178 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.172615 kubelet[3484]: W0904 17:32:22.169190 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.172615 kubelet[3484]: E0904 17:32:22.169276 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.172615 kubelet[3484]: E0904 17:32:22.169657 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.172615 kubelet[3484]: W0904 17:32:22.169669 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.172615 kubelet[3484]: E0904 17:32:22.169702 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.172615 kubelet[3484]: E0904 17:32:22.170109 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.172615 kubelet[3484]: W0904 17:32:22.170120 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.172615 kubelet[3484]: E0904 17:32:22.170150 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.172615 kubelet[3484]: E0904 17:32:22.170452 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.172615 kubelet[3484]: W0904 17:32:22.170463 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.170580 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.170898 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.173004 kubelet[3484]: W0904 17:32:22.170910 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.171200 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.171308 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.173004 kubelet[3484]: W0904 17:32:22.171319 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.171365 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.171838 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.173004 kubelet[3484]: W0904 17:32:22.171850 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.173004 kubelet[3484]: E0904 17:32:22.171880 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.172105 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.175752 kubelet[3484]: W0904 17:32:22.172116 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.172346 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.172689 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.175752 kubelet[3484]: W0904 17:32:22.172700 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.172801 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.173094 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.175752 kubelet[3484]: W0904 17:32:22.173106 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.173165 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.175752 kubelet[3484]: E0904 17:32:22.173417 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.176823 kubelet[3484]: W0904 17:32:22.173430 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.176823 kubelet[3484]: E0904 17:32:22.173533 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.176823 kubelet[3484]: E0904 17:32:22.175377 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.176823 kubelet[3484]: W0904 17:32:22.175389 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.176823 kubelet[3484]: E0904 17:32:22.175413 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.176823 kubelet[3484]: E0904 17:32:22.175747 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.176823 kubelet[3484]: W0904 17:32:22.175761 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.176823 kubelet[3484]: E0904 17:32:22.175783 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.176823 kubelet[3484]: E0904 17:32:22.176429 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.176823 kubelet[3484]: W0904 17:32:22.176442 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.177695 kubelet[3484]: E0904 17:32:22.177667 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.178693 kubelet[3484]: E0904 17:32:22.177881 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.178693 kubelet[3484]: W0904 17:32:22.177894 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.178693 kubelet[3484]: E0904 17:32:22.177982 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.178693 kubelet[3484]: E0904 17:32:22.178140 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.178693 kubelet[3484]: W0904 17:32:22.178150 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.178693 kubelet[3484]: E0904 17:32:22.178196 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.180110 kubelet[3484]: E0904 17:32:22.179996 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.180110 kubelet[3484]: W0904 17:32:22.180029 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.180110 kubelet[3484]: E0904 17:32:22.180053 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.181677 kubelet[3484]: E0904 17:32:22.181553 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.181677 kubelet[3484]: W0904 17:32:22.181568 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.182035 kubelet[3484]: E0904 17:32:22.181720 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.182189 kubelet[3484]: E0904 17:32:22.182177 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.182352 kubelet[3484]: W0904 17:32:22.182257 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.182501 kubelet[3484]: E0904 17:32:22.182426 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.182897 kubelet[3484]: E0904 17:32:22.182853 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.182897 kubelet[3484]: W0904 17:32:22.182867 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.183113 kubelet[3484]: E0904 17:32:22.182984 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.183499 kubelet[3484]: E0904 17:32:22.183487 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.183689 kubelet[3484]: W0904 17:32:22.183611 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.183689 kubelet[3484]: E0904 17:32:22.183633 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.199852 kubelet[3484]: E0904 17:32:22.199833 3484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.200646 kubelet[3484]: W0904 17:32:22.199863 3484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.200646 kubelet[3484]: E0904 17:32:22.199882 3484 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.219292 containerd[1805]: time="2024-09-04T17:32:22.219187239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:22.219605 containerd[1805]: time="2024-09-04T17:32:22.219447942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:22.219605 containerd[1805]: time="2024-09-04T17:32:22.219538743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:22.219862 containerd[1805]: time="2024-09-04T17:32:22.219561143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:22.276739 containerd[1805]: time="2024-09-04T17:32:22.274301883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b9jzj,Uid:d3b8db00-1f43-43b6-abeb-9e808a23870f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\"" Sep 4 17:32:22.278982 containerd[1805]: time="2024-09-04T17:32:22.278400823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:32:22.336619 containerd[1805]: time="2024-09-04T17:32:22.336572697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f4496fd-2ftfx,Uid:f9ecbb56-ec1e-45a2-b7af-2e4404fcda7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"70fd52d7a222e0f5913521dc471d4c99510df90e2027a98d4fca900a4ef4ff54\"" Sep 4 17:32:23.580193 containerd[1805]: time="2024-09-04T17:32:23.580135057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:23.582531 containerd[1805]: time="2024-09-04T17:32:23.582412180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:32:23.588362 containerd[1805]: time="2024-09-04T17:32:23.588173137Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:23.593156 containerd[1805]: time="2024-09-04T17:32:23.593015686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:23.594627 containerd[1805]: time="2024-09-04T17:32:23.594122997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.315669973s" Sep 4 17:32:23.594627 containerd[1805]: time="2024-09-04T17:32:23.594162497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:32:23.595582 containerd[1805]: time="2024-09-04T17:32:23.595195208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:32:23.596321 containerd[1805]: time="2024-09-04T17:32:23.596281518Z" level=info msg="CreateContainer within sandbox \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:32:23.636049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446904077.mount: Deactivated successfully. 
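
The driver-call.go and plugins.go messages above all come from kubelet probing the FlexVolume stub at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init command. The binary is not present yet ("executable file not found in $PATH"), so the call returns no output, and kubelet's attempt to decode the FlexVolume status JSON it expects fails with "unexpected end of JSON input". A minimal Go sketch of that decode step, using a simplified status struct rather than kubelet's own type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus mirrors the kind of JSON a FlexVolume driver is expected to
    // print for `init`, e.g. {"status":"Success","capabilities":{"attach":false}}.
    // The field set here is illustrative, not the exact kubelet type.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The missing driver produced empty output, so this is what kubelet saw:
        var st driverStatus
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input

        // A well-formed init response would decode cleanly:
        ok := `{"status":"Success","capabilities":{"attach":false}}`
        fmt.Println(json.Unmarshal([]byte(ok), &st), st.Status)
    }

The probe failures are noisy but expected at this point: the flexvol-driver init container created just above (from the pod2daemon-flexvol image) is what installs that uds binary into the plugin directory.
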
Sep 4 17:32:23.642544 containerd[1805]: time="2024-09-04T17:32:23.642316578Z" level=info msg="CreateContainer within sandbox \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2b470dc5592ead7173d5d82731f644d1bcfd84ac3700f733961891d0afd835ee\"" Sep 4 17:32:23.643300 containerd[1805]: time="2024-09-04T17:32:23.642772582Z" level=info msg="StartContainer for \"2b470dc5592ead7173d5d82731f644d1bcfd84ac3700f733961891d0afd835ee\"" Sep 4 17:32:23.667309 kubelet[3484]: E0904 17:32:23.666489 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:23.723572 containerd[1805]: time="2024-09-04T17:32:23.723528388Z" level=info msg="StartContainer for \"2b470dc5592ead7173d5d82731f644d1bcfd84ac3700f733961891d0afd835ee\" returns successfully" Sep 4 17:32:23.964697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b470dc5592ead7173d5d82731f644d1bcfd84ac3700f733961891d0afd835ee-rootfs.mount: Deactivated successfully. Sep 4 17:32:24.528535 containerd[1805]: time="2024-09-04T17:32:24.528471319Z" level=info msg="shim disconnected" id=2b470dc5592ead7173d5d82731f644d1bcfd84ac3700f733961891d0afd835ee namespace=k8s.io Sep 4 17:32:24.528535 containerd[1805]: time="2024-09-04T17:32:24.528531919Z" level=warning msg="cleaning up after shim disconnected" id=2b470dc5592ead7173d5d82731f644d1bcfd84ac3700f733961891d0afd835ee namespace=k8s.io Sep 4 17:32:24.528729 containerd[1805]: time="2024-09-04T17:32:24.528543720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:25.658268 kubelet[3484]: E0904 17:32:25.658130 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:26.377138 containerd[1805]: time="2024-09-04T17:32:26.376217254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:26.378243 containerd[1805]: time="2024-09-04T17:32:26.378178274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:32:26.384046 containerd[1805]: time="2024-09-04T17:32:26.383853430Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:26.387905 containerd[1805]: time="2024-09-04T17:32:26.387733669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:26.388999 containerd[1805]: time="2024-09-04T17:32:26.388884080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.793651772s" Sep 4 17:32:26.388999 containerd[1805]: time="2024-09-04T17:32:26.388921581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:32:26.391737 containerd[1805]: time="2024-09-04T17:32:26.390730199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:32:26.409868 containerd[1805]: time="2024-09-04T17:32:26.409777689Z" level=info msg="CreateContainer within sandbox \"70fd52d7a222e0f5913521dc471d4c99510df90e2027a98d4fca900a4ef4ff54\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:32:26.452360 containerd[1805]: time="2024-09-04T17:32:26.452318713Z" level=info msg="CreateContainer within sandbox \"70fd52d7a222e0f5913521dc471d4c99510df90e2027a98d4fca900a4ef4ff54\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8c34184cd47aa650c34a3ce7806ad9a07710af820acd6c04fed56f0b8833ce4f\"" Sep 4 17:32:26.452855 containerd[1805]: time="2024-09-04T17:32:26.452831418Z" level=info msg="StartContainer for \"8c34184cd47aa650c34a3ce7806ad9a07710af820acd6c04fed56f0b8833ce4f\"" Sep 4 17:32:26.534436 containerd[1805]: time="2024-09-04T17:32:26.533897127Z" level=info msg="StartContainer for \"8c34184cd47aa650c34a3ce7806ad9a07710af820acd6c04fed56f0b8833ce4f\" returns successfully" Sep 4 17:32:27.661254 kubelet[3484]: E0904 17:32:27.659665 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:27.741315 kubelet[3484]: I0904 17:32:27.741288 3484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:32:29.657531 kubelet[3484]: E0904 17:32:29.657501 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:30.167715 containerd[1805]: time="2024-09-04T17:32:30.167673482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:30.169647 containerd[1805]: time="2024-09-04T17:32:30.169588801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:32:30.173085 containerd[1805]: time="2024-09-04T17:32:30.173033635Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:30.177692 containerd[1805]: time="2024-09-04T17:32:30.177632181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:30.178465 containerd[1805]: time="2024-09-04T17:32:30.178350788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id 
\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.787584989s" Sep 4 17:32:30.178465 containerd[1805]: time="2024-09-04T17:32:30.178386588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:32:30.180423 containerd[1805]: time="2024-09-04T17:32:30.180390208Z" level=info msg="CreateContainer within sandbox \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:32:30.221327 containerd[1805]: time="2024-09-04T17:32:30.221290816Z" level=info msg="CreateContainer within sandbox \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\"" Sep 4 17:32:30.221830 containerd[1805]: time="2024-09-04T17:32:30.221724921Z" level=info msg="StartContainer for \"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\"" Sep 4 17:32:30.283720 containerd[1805]: time="2024-09-04T17:32:30.283590238Z" level=info msg="StartContainer for \"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\" returns successfully" Sep 4 17:32:30.769089 kubelet[3484]: I0904 17:32:30.768964 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-78f4496fd-2ftfx" podStartSLOduration=5.717348188 podCreationTimestamp="2024-09-04 17:32:21 +0000 UTC" firstStartedPulling="2024-09-04 17:32:22.337822309 +0000 UTC m=+16.777144565" lastFinishedPulling="2024-09-04 17:32:26.389394285 +0000 UTC m=+20.828716541" observedRunningTime="2024-09-04 17:32:26.753447618 +0000 UTC m=+21.192769774" watchObservedRunningTime="2024-09-04 17:32:30.768920164 +0000 UTC m=+25.208242320" Sep 4 17:32:31.657792 kubelet[3484]: E0904 17:32:31.657700 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:31.725204 kubelet[3484]: I0904 17:32:31.722057 3484 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:32:31.753452 kubelet[3484]: I0904 17:32:31.753105 3484 topology_manager.go:215] "Topology Admit Handler" podUID="11b141d0-410e-4509-a6be-b9306c94a513" podNamespace="kube-system" podName="coredns-5dd5756b68-pvrs8" Sep 4 17:32:31.755344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2-rootfs.mount: Deactivated successfully. 
Sep 4 17:32:31.761765 kubelet[3484]: I0904 17:32:31.761395 3484 topology_manager.go:215] "Topology Admit Handler" podUID="06d0bcfd-0047-41da-912c-7db4d68edcca" podNamespace="kube-system" podName="coredns-5dd5756b68-fh77d" Sep 4 17:32:31.761765 kubelet[3484]: I0904 17:32:31.761577 3484 topology_manager.go:215] "Topology Admit Handler" podUID="f0f33dc0-57aa-4818-afaf-a2d00c3723b4" podNamespace="calico-system" podName="calico-kube-controllers-5644997dc9-nplr7" Sep 4 17:32:31.839721 kubelet[3484]: I0904 17:32:31.839647 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf429\" (UniqueName: \"kubernetes.io/projected/11b141d0-410e-4509-a6be-b9306c94a513-kube-api-access-sf429\") pod \"coredns-5dd5756b68-pvrs8\" (UID: \"11b141d0-410e-4509-a6be-b9306c94a513\") " pod="kube-system/coredns-5dd5756b68-pvrs8" Sep 4 17:32:31.840291 kubelet[3484]: I0904 17:32:31.839775 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06d0bcfd-0047-41da-912c-7db4d68edcca-config-volume\") pod \"coredns-5dd5756b68-fh77d\" (UID: \"06d0bcfd-0047-41da-912c-7db4d68edcca\") " pod="kube-system/coredns-5dd5756b68-fh77d" Sep 4 17:32:31.840291 kubelet[3484]: I0904 17:32:31.839858 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpl4f\" (UniqueName: \"kubernetes.io/projected/06d0bcfd-0047-41da-912c-7db4d68edcca-kube-api-access-hpl4f\") pod \"coredns-5dd5756b68-fh77d\" (UID: \"06d0bcfd-0047-41da-912c-7db4d68edcca\") " pod="kube-system/coredns-5dd5756b68-fh77d" Sep 4 17:32:31.840291 kubelet[3484]: I0904 17:32:31.839932 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11b141d0-410e-4509-a6be-b9306c94a513-config-volume\") pod \"coredns-5dd5756b68-pvrs8\" (UID: \"11b141d0-410e-4509-a6be-b9306c94a513\") " pod="kube-system/coredns-5dd5756b68-pvrs8" Sep 4 17:32:31.840291 kubelet[3484]: I0904 17:32:31.839964 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0f33dc0-57aa-4818-afaf-a2d00c3723b4-tigera-ca-bundle\") pod \"calico-kube-controllers-5644997dc9-nplr7\" (UID: \"f0f33dc0-57aa-4818-afaf-a2d00c3723b4\") " pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" Sep 4 17:32:31.840291 kubelet[3484]: I0904 17:32:31.840007 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvr2\" (UniqueName: \"kubernetes.io/projected/f0f33dc0-57aa-4818-afaf-a2d00c3723b4-kube-api-access-thvr2\") pod \"calico-kube-controllers-5644997dc9-nplr7\" (UID: \"f0f33dc0-57aa-4818-afaf-a2d00c3723b4\") " pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" Sep 4 17:32:32.079579 containerd[1805]: time="2024-09-04T17:32:32.079461024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pvrs8,Uid:11b141d0-410e-4509-a6be-b9306c94a513,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:32.082093 containerd[1805]: time="2024-09-04T17:32:32.082052149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fh77d,Uid:06d0bcfd-0047-41da-912c-7db4d68edcca,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:32.084668 containerd[1805]: time="2024-09-04T17:32:32.084636173Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5644997dc9-nplr7,Uid:f0f33dc0-57aa-4818-afaf-a2d00c3723b4,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:33.661467 containerd[1805]: time="2024-09-04T17:32:33.661423823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9dpf,Uid:0887fe36-7732-4bbf-b901-deca836854e8,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:40.316718 containerd[1805]: time="2024-09-04T17:32:40.316663028Z" level=error msg="collecting metrics for bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2" error="cgroups: cgroup deleted: unknown" Sep 4 17:32:41.719041 containerd[1805]: time="2024-09-04T17:32:41.718975017Z" level=error msg="failed to handle container TaskExit event container_id:\"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\" id:\"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\" pid:4177 exited_at:{seconds:1725471151 nanos:717560639}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Sep 4 17:32:41.759420 containerd[1805]: time="2024-09-04T17:32:41.759294836Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Sep 4 17:32:42.997334 containerd[1805]: time="2024-09-04T17:32:42.997283326Z" level=error msg="Failed to destroy network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:42.997827 containerd[1805]: time="2024-09-04T17:32:42.997625629Z" level=error msg="encountered an error cleaning up failed sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:42.997827 containerd[1805]: time="2024-09-04T17:32:42.997685729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pvrs8,Uid:11b141d0-410e-4509-a6be-b9306c94a513,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:42.997991 kubelet[3484]: E0904 17:32:42.997964 3484 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:42.998409 kubelet[3484]: E0904 17:32:42.998035 3484 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-pvrs8" Sep 4 17:32:42.998409 kubelet[3484]: E0904 
17:32:42.998068 3484 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-pvrs8" Sep 4 17:32:42.998409 kubelet[3484]: E0904 17:32:42.998163 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-pvrs8_kube-system(11b141d0-410e-4509-a6be-b9306c94a513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-pvrs8_kube-system(11b141d0-410e-4509-a6be-b9306c94a513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-pvrs8" podUID="11b141d0-410e-4509-a6be-b9306c94a513" Sep 4 17:32:43.049809 containerd[1805]: time="2024-09-04T17:32:43.049760441Z" level=error msg="Failed to destroy network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.050091 containerd[1805]: time="2024-09-04T17:32:43.050058644Z" level=error msg="encountered an error cleaning up failed sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.050169 containerd[1805]: time="2024-09-04T17:32:43.050120244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fh77d,Uid:06d0bcfd-0047-41da-912c-7db4d68edcca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.050413 kubelet[3484]: E0904 17:32:43.050384 3484 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.050525 kubelet[3484]: E0904 17:32:43.050453 3484 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-fh77d" Sep 4 17:32:43.050525 kubelet[3484]: E0904 17:32:43.050481 3484 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fh77d" Sep 4 17:32:43.050615 kubelet[3484]: E0904 17:32:43.050573 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fh77d_kube-system(06d0bcfd-0047-41da-912c-7db4d68edcca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fh77d_kube-system(06d0bcfd-0047-41da-912c-7db4d68edcca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fh77d" podUID="06d0bcfd-0047-41da-912c-7db4d68edcca" Sep 4 17:32:43.086388 containerd[1805]: time="2024-09-04T17:32:43.086185729Z" level=info msg="TaskExit event container_id:\"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\" id:\"bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2\" pid:4177 exited_at:{seconds:1725471151 nanos:717560639}" Sep 4 17:32:43.087430 containerd[1805]: time="2024-09-04T17:32:43.087320738Z" level=info msg="shim disconnected" id=bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2 namespace=k8s.io Sep 4 17:32:43.087707 containerd[1805]: time="2024-09-04T17:32:43.087558740Z" level=warning msg="cleaning up after shim disconnected" id=bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2 namespace=k8s.io Sep 4 17:32:43.087707 containerd[1805]: time="2024-09-04T17:32:43.087578040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:43.107326 containerd[1805]: time="2024-09-04T17:32:43.107280296Z" level=info msg="Ensure that container bec2b40931c798c1f4b2d312e605d1bb731eccec951884d4a0c285ffc2a73ad2 in task-service has been cleanup successfully" Sep 4 17:32:43.107489 containerd[1805]: time="2024-09-04T17:32:43.107283996Z" level=error msg="Failed to destroy network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.109253 containerd[1805]: time="2024-09-04T17:32:43.107973101Z" level=error msg="encountered an error cleaning up failed sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.109253 containerd[1805]: time="2024-09-04T17:32:43.108066202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5644997dc9-nplr7,Uid:f0f33dc0-57aa-4818-afaf-a2d00c3723b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.111509 kubelet[3484]: E0904 17:32:43.108558 3484 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.111509 kubelet[3484]: E0904 17:32:43.108620 3484 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" Sep 4 17:32:43.111509 kubelet[3484]: E0904 17:32:43.108646 3484 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" Sep 4 17:32:43.111708 kubelet[3484]: E0904 17:32:43.108722 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5644997dc9-nplr7_calico-system(f0f33dc0-57aa-4818-afaf-a2d00c3723b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5644997dc9-nplr7_calico-system(f0f33dc0-57aa-4818-afaf-a2d00c3723b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" podUID="f0f33dc0-57aa-4818-afaf-a2d00c3723b4" Sep 4 17:32:43.202357 containerd[1805]: time="2024-09-04T17:32:43.202304647Z" level=error msg="Failed to destroy network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.202676 containerd[1805]: time="2024-09-04T17:32:43.202642050Z" level=error msg="encountered an error cleaning up failed sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.202770 containerd[1805]: time="2024-09-04T17:32:43.202723851Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-d9dpf,Uid:0887fe36-7732-4bbf-b901-deca836854e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.203015 kubelet[3484]: E0904 17:32:43.202988 3484 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.203116 kubelet[3484]: E0904 17:32:43.203053 3484 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:43.203116 kubelet[3484]: E0904 17:32:43.203080 3484 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d9dpf" Sep 4 17:32:43.203208 kubelet[3484]: E0904 17:32:43.203174 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d9dpf_calico-system(0887fe36-7732-4bbf-b901-deca836854e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d9dpf_calico-system(0887fe36-7732-4bbf-b901-deca836854e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:43.771911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4-shm.mount: Deactivated successfully. Sep 4 17:32:43.772079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1-shm.mount: Deactivated successfully. Sep 4 17:32:43.772206 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474-shm.mount: Deactivated successfully. 
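
All four RunPodSandbox failures above (both coredns pods, calico-kube-controllers and csi-node-driver) share one root cause reported by the Calico CNI plugin: /var/lib/calico/nodename does not exist yet. That file only appears once the calico/node container is up, which is what the error text itself suggests checking, and the plugin requires it before doing any interface or IPAM work. A small sketch of that precondition, with the path and wording taken from the log and the function itself illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename" // path reported in the log

    // readNodename mimics the precondition enforced before CNI ADD/DEL: the node
    // name file must exist, otherwise fail with a hint that calico/node is not
    // running or /var/lib/calico is not mounted.
    func readNodename() (string, error) {
        b, err := os.ReadFile(nodenameFile)
        if errors.Is(err, os.ErrNotExist) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return string(b), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("node name:", name)
    }
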
Sep 4 17:32:43.796751 kubelet[3484]: I0904 17:32:43.795906 3484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:32:43.797195 containerd[1805]: time="2024-09-04T17:32:43.797161151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:32:43.801623 containerd[1805]: time="2024-09-04T17:32:43.801349585Z" level=info msg="StopPodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\"" Sep 4 17:32:43.802614 containerd[1805]: time="2024-09-04T17:32:43.802590594Z" level=info msg="Ensure that sandbox 86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709 in task-service has been cleanup successfully" Sep 4 17:32:43.803957 kubelet[3484]: I0904 17:32:43.802868 3484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:32:43.805363 containerd[1805]: time="2024-09-04T17:32:43.804943813Z" level=info msg="StopPodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\"" Sep 4 17:32:43.805678 containerd[1805]: time="2024-09-04T17:32:43.805653719Z" level=info msg="Ensure that sandbox c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4 in task-service has been cleanup successfully" Sep 4 17:32:43.808109 kubelet[3484]: I0904 17:32:43.808090 3484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:32:43.809916 containerd[1805]: time="2024-09-04T17:32:43.808906944Z" level=info msg="StopPodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\"" Sep 4 17:32:43.809916 containerd[1805]: time="2024-09-04T17:32:43.809103246Z" level=info msg="Ensure that sandbox a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1 in task-service has been cleanup successfully" Sep 4 17:32:43.812101 kubelet[3484]: I0904 17:32:43.812086 3484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:32:43.820108 containerd[1805]: time="2024-09-04T17:32:43.819388727Z" level=info msg="StopPodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\"" Sep 4 17:32:43.820108 containerd[1805]: time="2024-09-04T17:32:43.819601929Z" level=info msg="Ensure that sandbox 17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474 in task-service has been cleanup successfully" Sep 4 17:32:43.887292 containerd[1805]: time="2024-09-04T17:32:43.887224164Z" level=error msg="StopPodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" failed" error="failed to destroy network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.888253 kubelet[3484]: E0904 17:32:43.887805 3484 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:32:43.888253 kubelet[3484]: E0904 17:32:43.887931 3484 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709"} Sep 4 17:32:43.888253 kubelet[3484]: E0904 17:32:43.887985 3484 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0887fe36-7732-4bbf-b901-deca836854e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:43.888253 kubelet[3484]: E0904 17:32:43.888026 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0887fe36-7732-4bbf-b901-deca836854e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d9dpf" podUID="0887fe36-7732-4bbf-b901-deca836854e8" Sep 4 17:32:43.889769 containerd[1805]: time="2024-09-04T17:32:43.889564782Z" level=error msg="StopPodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" failed" error="failed to destroy network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.891278 kubelet[3484]: E0904 17:32:43.890380 3484 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:32:43.891278 kubelet[3484]: E0904 17:32:43.890416 3484 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4"} Sep 4 17:32:43.891278 kubelet[3484]: E0904 17:32:43.890463 3484 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0f33dc0-57aa-4818-afaf-a2d00c3723b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:43.891278 kubelet[3484]: E0904 17:32:43.890499 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0f33dc0-57aa-4818-afaf-a2d00c3723b4\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" podUID="f0f33dc0-57aa-4818-afaf-a2d00c3723b4" Sep 4 17:32:43.896438 containerd[1805]: time="2024-09-04T17:32:43.896288035Z" level=error msg="StopPodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" failed" error="failed to destroy network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.897240 kubelet[3484]: E0904 17:32:43.897082 3484 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:32:43.897240 kubelet[3484]: E0904 17:32:43.897130 3484 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474"} Sep 4 17:32:43.897240 kubelet[3484]: E0904 17:32:43.897177 3484 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11b141d0-410e-4509-a6be-b9306c94a513\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:43.897240 kubelet[3484]: E0904 17:32:43.897212 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11b141d0-410e-4509-a6be-b9306c94a513\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-pvrs8" podUID="11b141d0-410e-4509-a6be-b9306c94a513" Sep 4 17:32:43.900095 containerd[1805]: time="2024-09-04T17:32:43.900002465Z" level=error msg="StopPodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" failed" error="failed to destroy network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:43.900218 kubelet[3484]: E0904 17:32:43.900202 3484 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:32:43.900305 kubelet[3484]: E0904 17:32:43.900248 3484 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1"} Sep 4 17:32:43.900305 kubelet[3484]: E0904 17:32:43.900296 3484 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06d0bcfd-0047-41da-912c-7db4d68edcca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:43.900412 kubelet[3484]: E0904 17:32:43.900328 3484 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06d0bcfd-0047-41da-912c-7db4d68edcca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fh77d" podUID="06d0bcfd-0047-41da-912c-7db4d68edcca" Sep 4 17:32:46.200369 kubelet[3484]: I0904 17:32:46.199930 3484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:32:53.384637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493387306.mount: Deactivated successfully. 
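
A side note on the systemd unit names that keep appearing around these entries, such as var-lib-containerd-tmpmounts-containerd\x2dmount2493387306.mount: mount unit names are derived from the mount path with systemd's escaping rules, where "/" separators become "-" and a literal "-" (among other special bytes) is hex-encoded as \xNN. A simplified re-implementation of that escaping, enough to reproduce the names in the log; it is not a call into systemd itself:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath applies systemd-style path escaping as used for .mount unit
    // names: strip leading/trailing "/", turn the remaining "/" into "-", and
    // hex-escape bytes outside [a-zA-Z0-9:_.] as \xNN (so "-" becomes \x2d).
    // Simplified: ignores corner cases such as "." as the first character.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        p := "/var/lib/containerd/tmpmounts/containerd-mount2493387306"
        fmt.Println(escapePath(p) + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount2493387306.mount
    }
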
Sep 4 17:32:53.433614 containerd[1805]: time="2024-09-04T17:32:53.432169621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:53.435103 containerd[1805]: time="2024-09-04T17:32:53.434845946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:32:53.439929 containerd[1805]: time="2024-09-04T17:32:53.439116786Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:53.446170 containerd[1805]: time="2024-09-04T17:32:53.444894140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:53.446170 containerd[1805]: time="2024-09-04T17:32:53.445749048Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 9.648532596s" Sep 4 17:32:53.446170 containerd[1805]: time="2024-09-04T17:32:53.445782449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:32:53.465691 containerd[1805]: time="2024-09-04T17:32:53.465657436Z" level=info msg="CreateContainer within sandbox \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:32:53.525149 containerd[1805]: time="2024-09-04T17:32:53.525089995Z" level=info msg="CreateContainer within sandbox \"dfd1102e0ce05921e0b58d51831d369be22e5eea4afcfb1ec07702ab44bbcad6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"65f4a9f03c943a660ee7c2ddde0332c91fe22c96d4fe940dc53fb0665768ed52\"" Sep 4 17:32:53.526198 containerd[1805]: time="2024-09-04T17:32:53.525673601Z" level=info msg="StartContainer for \"65f4a9f03c943a660ee7c2ddde0332c91fe22c96d4fe940dc53fb0665768ed52\"" Sep 4 17:32:53.589146 containerd[1805]: time="2024-09-04T17:32:53.589097897Z" level=info msg="StartContainer for \"65f4a9f03c943a660ee7c2ddde0332c91fe22c96d4fe940dc53fb0665768ed52\" returns successfully" Sep 4 17:32:53.857922 kubelet[3484]: I0904 17:32:53.857889 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-b9jzj" podStartSLOduration=1.68928509 podCreationTimestamp="2024-09-04 17:32:21 +0000 UTC" firstStartedPulling="2024-09-04 17:32:22.277890718 +0000 UTC m=+16.717212874" lastFinishedPulling="2024-09-04 17:32:53.446444855 +0000 UTC m=+47.885767011" observedRunningTime="2024-09-04 17:32:53.855607106 +0000 UTC m=+48.294929362" watchObservedRunningTime="2024-09-04 17:32:53.857839227 +0000 UTC m=+48.297161383" Sep 4 17:32:53.954271 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:32:53.954442 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
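
The pod_startup_latency_tracker entry just above for calico-node-b9jzj is the second such entry in this log, and both it and the earlier calico-typha one are consistent with podStartSLOduration being the time from pod creation to watchObservedRunningTime minus the window spent pulling images: for calico-node, (17:32:53.857839227 - 17:32:21) - (17:32:53.446444855 - 17:32:22.277890718) = 32.857839227 - 31.168554137 = 1.68928509 seconds, exactly the logged value (the typha numbers work out the same way to 5.717348188). A short Go sketch of that arithmetic using the logged timestamps; the formula is inferred from these two entries rather than quoted from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // Values copied from the calico-node pod_startup_latency_tracker entry above.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-09-04 17:32:21 +0000 UTC")
        firstPull := mustParse("2024-09-04 17:32:22.277890718 +0000 UTC")
        lastPull := mustParse("2024-09-04 17:32:53.446444855 +0000 UTC")
        watchRunning := mustParse("2024-09-04 17:32:53.857839227 +0000 UTC")

        // Startup duration with the image-pull window subtracted; this reproduces
        // podStartSLOduration=1.68928509 from the log entry.
        slo := watchRunning.Sub(created) - lastPull.Sub(firstPull)
        fmt.Println(slo.Seconds()) // 1.68928509
    }
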
Sep 4 17:32:54.660770 containerd[1805]: time="2024-09-04T17:32:54.659175369Z" level=info msg="StopPodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\"" Sep 4 17:32:54.661849 containerd[1805]: time="2024-09-04T17:32:54.661466290Z" level=info msg="StopPodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\"" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.719 [INFO][4544] k8s.go 608: Cleaning up netns ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.721 [INFO][4544] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" iface="eth0" netns="/var/run/netns/cni-f3a262a3-f975-23d2-7cd4-91401e2a0208" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.723 [INFO][4544] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" iface="eth0" netns="/var/run/netns/cni-f3a262a3-f975-23d2-7cd4-91401e2a0208" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.723 [INFO][4544] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" iface="eth0" netns="/var/run/netns/cni-f3a262a3-f975-23d2-7cd4-91401e2a0208" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.724 [INFO][4544] k8s.go 615: Releasing IP address(es) ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.724 [INFO][4544] utils.go 188: Calico CNI releasing IP address ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.753 [INFO][4554] ipam_plugin.go 417: Releasing address using handleID ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.753 [INFO][4554] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.753 [INFO][4554] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.758 [WARNING][4554] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.758 [INFO][4554] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.760 [INFO][4554] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:54.763897 containerd[1805]: 2024-09-04 17:32:54.762 [INFO][4544] k8s.go 621: Teardown processing complete. 
ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:32:54.765302 containerd[1805]: time="2024-09-04T17:32:54.764800163Z" level=info msg="TearDown network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" successfully" Sep 4 17:32:54.767416 containerd[1805]: time="2024-09-04T17:32:54.767353287Z" level=info msg="StopPodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" returns successfully" Sep 4 17:32:54.769145 systemd[1]: run-netns-cni\x2df3a262a3\x2df975\x2d23d2\x2d7cd4\x2d91401e2a0208.mount: Deactivated successfully. Sep 4 17:32:54.771580 containerd[1805]: time="2024-09-04T17:32:54.771445525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pvrs8,Uid:11b141d0-410e-4509-a6be-b9306c94a513,Namespace:kube-system,Attempt:1,}" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.722 [INFO][4537] k8s.go 608: Cleaning up netns ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.722 [INFO][4537] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" iface="eth0" netns="/var/run/netns/cni-8d3564f3-67fa-1a21-60c6-88a19be7b7ad" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.723 [INFO][4537] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" iface="eth0" netns="/var/run/netns/cni-8d3564f3-67fa-1a21-60c6-88a19be7b7ad" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.724 [INFO][4537] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" iface="eth0" netns="/var/run/netns/cni-8d3564f3-67fa-1a21-60c6-88a19be7b7ad" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.724 [INFO][4537] k8s.go 615: Releasing IP address(es) ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.724 [INFO][4537] utils.go 188: Calico CNI releasing IP address ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.753 [INFO][4553] ipam_plugin.go 417: Releasing address using handleID ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.753 [INFO][4553] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.760 [INFO][4553] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.769 [WARNING][4553] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.769 [INFO][4553] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.777 [INFO][4553] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:54.781050 containerd[1805]: 2024-09-04 17:32:54.779 [INFO][4537] k8s.go 621: Teardown processing complete. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:32:54.784695 containerd[1805]: time="2024-09-04T17:32:54.781312092Z" level=info msg="TearDown network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" successfully" Sep 4 17:32:54.784695 containerd[1805]: time="2024-09-04T17:32:54.781334492Z" level=info msg="StopPodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" returns successfully" Sep 4 17:32:54.785637 containerd[1805]: time="2024-09-04T17:32:54.785170293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5644997dc9-nplr7,Uid:f0f33dc0-57aa-4818-afaf-a2d00c3723b4,Namespace:calico-system,Attempt:1,}" Sep 4 17:32:54.787947 systemd[1]: run-netns-cni\x2d8d3564f3\x2d67fa\x2d1a21\x2d60c6\x2d88a19be7b7ad.mount: Deactivated successfully. 
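Both teardowns end the same way: the pod network is removed and systemd reports the per-sandbox network-namespace mount (run-netns-cni\x2d….mount) as deactivated. The \x2d sequences are systemd's unit-name escaping of literal '-' characters. The sketch below is illustrative only and handles just the escapes that appear in these unit names (not systemd's full escaping rules); it maps a mount unit name back to the netns path quoted in the teardown entries:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses the escaping seen in the mount unit names above:
// "-" separates path components and "\xHH" encodes a literal byte.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d8d3564f3\x2d67fa\x2d1a21\x2d60c6\x2d88a19be7b7ad.mount`))
	// -> /run/netns/cni-8d3564f3-67fa-1a21-60c6-88a19be7b7ad
	// (the CNI delete above logs the same namespace under /var/run/netns;
	//  /var/run is a symlink to /run on systemd systems)
}
```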
Sep 4 17:32:55.056692 systemd-networkd[1373]: cali604632e5446: Link UP Sep 4 17:32:55.056919 systemd-networkd[1373]: cali604632e5446: Gained carrier Sep 4 17:32:55.075064 systemd-networkd[1373]: cali1a3e6f5c76e: Link UP Sep 4 17:32:55.075744 systemd-networkd[1373]: cali1a3e6f5c76e: Gained carrier Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.898 [INFO][4565] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.918 [INFO][4565] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0 coredns-5dd5756b68- kube-system 11b141d0-410e-4509-a6be-b9306c94a513 694 0 2024-09-04 17:32:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-eeaffe6a3f coredns-5dd5756b68-pvrs8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali604632e5446 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.918 [INFO][4565] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.975 [INFO][4608] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" HandleID="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.990 [INFO][4608] ipam_plugin.go 270: Auto assigning IP ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" HandleID="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-eeaffe6a3f", "pod":"coredns-5dd5756b68-pvrs8", "timestamp":"2024-09-04 17:32:54.97529384 +0000 UTC"}, Hostname:"ci-3975.2.1-a-eeaffe6a3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.990 [INFO][4608] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.990 [INFO][4608] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.991 [INFO][4608] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-eeaffe6a3f' Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.994 [INFO][4608] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:54.999 [INFO][4608] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.005 [INFO][4608] ipam.go 489: Trying affinity for 192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.007 [INFO][4608] ipam.go 155: Attempting to load block cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.012 [INFO][4608] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.012 [INFO][4608] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.018 [INFO][4608] ipam.go 1685: Creating new handle: k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.021 [INFO][4608] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.025 [INFO][4608] ipam.go 1216: Successfully claimed IPs: [192.168.87.1/26] block=192.168.87.0/26 handle="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.025 [INFO][4608] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.1/26] handle="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.025 [INFO][4608] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:32:55.084589 containerd[1805]: 2024-09-04 17:32:55.025 [INFO][4608] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.87.1/26] IPv6=[] ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" HandleID="k8s-pod-network.9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:55.088759 containerd[1805]: 2024-09-04 17:32:55.027 [INFO][4565] k8s.go 386: Populated endpoint ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"11b141d0-410e-4509-a6be-b9306c94a513", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"", Pod:"coredns-5dd5756b68-pvrs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali604632e5446", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:55.088759 containerd[1805]: 2024-09-04 17:32:55.027 [INFO][4565] k8s.go 387: Calico CNI using IPs: [192.168.87.1/32] ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:55.088759 containerd[1805]: 2024-09-04 17:32:55.027 [INFO][4565] dataplane_linux.go 68: Setting the host side veth name to cali604632e5446 ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:55.088759 containerd[1805]: 2024-09-04 17:32:55.056 [INFO][4565] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" 
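The endpoint dump above prints the coredns pod's port list in hex (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). Decoded, these are just the usual CoreDNS ports already named in decimal in the plugin.go 326 entry further up (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153); a trivial conversion, with the values copied from the dump:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpoint dump above.
	ports := []struct {
		name string
		port uint16
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics (TCP)", 0x23c1},
	}
	for _, p := range ports {
		fmt.Printf("%-13s -> %d\n", p.name, p.port) // 53, 53, 9153
	}
}
```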
Sep 4 17:32:55.088759 containerd[1805]: 2024-09-04 17:32:55.057 [INFO][4565] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"11b141d0-410e-4509-a6be-b9306c94a513", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de", Pod:"coredns-5dd5756b68-pvrs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali604632e5446", MAC:"ae:db:2b:23:b7:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:55.088759 containerd[1805]: 2024-09-04 17:32:55.079 [INFO][4565] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de" Namespace="kube-system" Pod="coredns-5dd5756b68-pvrs8" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:54.919 [INFO][4574] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:54.933 [INFO][4574] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0 calico-kube-controllers-5644997dc9- calico-system f0f33dc0-57aa-4818-afaf-a2d00c3723b4 695 0 2024-09-04 17:32:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5644997dc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.1-a-eeaffe6a3f calico-kube-controllers-5644997dc9-nplr7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1a3e6f5c76e [] []}} 
ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:54.933 [INFO][4574] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.006 [INFO][4612] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" HandleID="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.021 [INFO][4612] ipam_plugin.go 270: Auto assigning IP ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" HandleID="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eda80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-eeaffe6a3f", "pod":"calico-kube-controllers-5644997dc9-nplr7", "timestamp":"2024-09-04 17:32:55.006660948 +0000 UTC"}, Hostname:"ci-3975.2.1-a-eeaffe6a3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.021 [INFO][4612] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.025 [INFO][4612] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.025 [INFO][4612] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-eeaffe6a3f' Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.027 [INFO][4612] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.032 [INFO][4612] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.035 [INFO][4612] ipam.go 489: Trying affinity for 192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.037 [INFO][4612] ipam.go 155: Attempting to load block cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.039 [INFO][4612] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.039 [INFO][4612] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.041 [INFO][4612] ipam.go 1685: Creating new handle: k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52 Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.044 [INFO][4612] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.050 [INFO][4612] ipam.go 1216: Successfully claimed IPs: [192.168.87.2/26] block=192.168.87.0/26 handle="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.050 [INFO][4612] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.2/26] handle="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.050 [INFO][4612] ipam_plugin.go 379: Released host-wide IPAM lock. 
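The IPAM trace above repeats the same sequence for each sandbox on this node: take the host-wide IPAM lock, confirm the host's affinity for block 192.168.87.0/26, load the block, claim the next free address (192.168.87.1 for coredns-5dd5756b68-pvrs8, 192.168.87.2 for calico-kube-controllers-5644997dc9-nplr7), then release the lock. Purely as an illustration of the block arithmetic, using Go's net/netip and the values from these entries:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses taken from the IPAM entries above.
	block := netip.MustParsePrefix("192.168.87.0/26")
	assigned := []string{"192.168.87.1", "192.168.87.2"}

	// A /26 spans 2^(32-26) = 64 addresses, so a single affine block
	// comfortably covers the handful of pods scheduled on this node.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	for _, a := range assigned {
		addr := netip.MustParseAddr(a)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```

Later entries in this log claim 192.168.87.3 and 192.168.87.4 from the same block for csi-node-driver-d9dpf and coredns-5dd5756b68-fh77d.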
Sep 4 17:32:55.092356 containerd[1805]: 2024-09-04 17:32:55.050 [INFO][4612] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.87.2/26] IPv6=[] ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" HandleID="k8s-pod-network.9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.093493 containerd[1805]: 2024-09-04 17:32:55.067 [INFO][4574] k8s.go 386: Populated endpoint ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0", GenerateName:"calico-kube-controllers-5644997dc9-", Namespace:"calico-system", SelfLink:"", UID:"f0f33dc0-57aa-4818-afaf-a2d00c3723b4", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5644997dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"", Pod:"calico-kube-controllers-5644997dc9-nplr7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a3e6f5c76e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:55.093493 containerd[1805]: 2024-09-04 17:32:55.067 [INFO][4574] k8s.go 387: Calico CNI using IPs: [192.168.87.2/32] ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.093493 containerd[1805]: 2024-09-04 17:32:55.068 [INFO][4574] dataplane_linux.go 68: Setting the host side veth name to cali1a3e6f5c76e ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.093493 containerd[1805]: 2024-09-04 17:32:55.072 [INFO][4574] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.093493 containerd[1805]: 2024-09-04 17:32:55.073 [INFO][4574] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0", GenerateName:"calico-kube-controllers-5644997dc9-", Namespace:"calico-system", SelfLink:"", UID:"f0f33dc0-57aa-4818-afaf-a2d00c3723b4", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5644997dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52", Pod:"calico-kube-controllers-5644997dc9-nplr7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a3e6f5c76e", MAC:"82:58:17:7a:db:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:55.093493 containerd[1805]: 2024-09-04 17:32:55.090 [INFO][4574] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52" Namespace="calico-system" Pod="calico-kube-controllers-5644997dc9-nplr7" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:32:55.144885 containerd[1805]: time="2024-09-04T17:32:55.143807282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:55.144885 containerd[1805]: time="2024-09-04T17:32:55.144326182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:55.147412 containerd[1805]: time="2024-09-04T17:32:55.146865782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:55.147412 containerd[1805]: time="2024-09-04T17:32:55.147052682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:55.149471 containerd[1805]: time="2024-09-04T17:32:55.147017682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:55.149471 containerd[1805]: time="2024-09-04T17:32:55.148105183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:55.149471 containerd[1805]: time="2024-09-04T17:32:55.148938083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:55.149471 containerd[1805]: time="2024-09-04T17:32:55.148975383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:55.255853 containerd[1805]: time="2024-09-04T17:32:55.255484909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pvrs8,Uid:11b141d0-410e-4509-a6be-b9306c94a513,Namespace:kube-system,Attempt:1,} returns sandbox id \"9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de\"" Sep 4 17:32:55.259691 containerd[1805]: time="2024-09-04T17:32:55.259375910Z" level=info msg="CreateContainer within sandbox \"9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:32:55.330329 containerd[1805]: time="2024-09-04T17:32:55.329901728Z" level=info msg="CreateContainer within sandbox \"9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97c7a67975dd460054cc58178c102251f6bbc0caf47aaad3b395341fe9c9f727\"" Sep 4 17:32:55.334119 containerd[1805]: time="2024-09-04T17:32:55.331548628Z" level=info msg="StartContainer for \"97c7a67975dd460054cc58178c102251f6bbc0caf47aaad3b395341fe9c9f727\"" Sep 4 17:32:55.360522 containerd[1805]: time="2024-09-04T17:32:55.360424335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5644997dc9-nplr7,Uid:f0f33dc0-57aa-4818-afaf-a2d00c3723b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52\"" Sep 4 17:32:55.364340 containerd[1805]: time="2024-09-04T17:32:55.363481036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:32:55.474969 containerd[1805]: time="2024-09-04T17:32:55.474919664Z" level=info msg="StartContainer for \"97c7a67975dd460054cc58178c102251f6bbc0caf47aaad3b395341fe9c9f727\" returns successfully" Sep 4 17:32:55.825280 kernel: bpftool[4877]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:32:55.866010 kubelet[3484]: I0904 17:32:55.864969 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pvrs8" podStartSLOduration=40.864923161 podCreationTimestamp="2024-09-04 17:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:55.86390426 +0000 UTC m=+50.303226516" watchObservedRunningTime="2024-09-04 17:32:55.864923161 +0000 UTC m=+50.304245317" Sep 4 17:32:56.367835 systemd-networkd[1373]: vxlan.calico: Link UP Sep 4 17:32:56.367843 systemd-networkd[1373]: vxlan.calico: Gained carrier Sep 4 17:32:56.673375 systemd-networkd[1373]: cali604632e5446: Gained IPv6LL Sep 4 17:32:56.800348 systemd-networkd[1373]: cali1a3e6f5c76e: Gained IPv6LL Sep 4 17:32:57.716509 containerd[1805]: time="2024-09-04T17:32:57.716462620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:57.718384 containerd[1805]: time="2024-09-04T17:32:57.718320121Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:32:57.721851 containerd[1805]: time="2024-09-04T17:32:57.721800622Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:57.726413 containerd[1805]: time="2024-09-04T17:32:57.726221223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:57.727340 containerd[1805]: time="2024-09-04T17:32:57.727184523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.363662587s" Sep 4 17:32:57.727340 containerd[1805]: time="2024-09-04T17:32:57.727224323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:32:57.750383 containerd[1805]: time="2024-09-04T17:32:57.749575729Z" level=info msg="CreateContainer within sandbox \"9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:32:57.788393 containerd[1805]: time="2024-09-04T17:32:57.788270938Z" level=info msg="CreateContainer within sandbox \"9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0e9a909b1786823607efe3a04ba650417428965e75b6cda196dc11b14362259\"" Sep 4 17:32:57.789482 containerd[1805]: time="2024-09-04T17:32:57.789028638Z" level=info msg="StartContainer for \"c0e9a909b1786823607efe3a04ba650417428965e75b6cda196dc11b14362259\"" Sep 4 17:32:57.824483 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Sep 4 17:32:57.893527 containerd[1805]: time="2024-09-04T17:32:57.893481764Z" level=info msg="StartContainer for \"c0e9a909b1786823607efe3a04ba650417428965e75b6cda196dc11b14362259\" returns successfully" Sep 4 17:32:58.936217 kubelet[3484]: I0904 17:32:58.936176 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5644997dc9-nplr7" podStartSLOduration=34.571218112 podCreationTimestamp="2024-09-04 17:32:22 +0000 UTC" firstStartedPulling="2024-09-04 17:32:55.362698636 +0000 UTC m=+49.802020792" lastFinishedPulling="2024-09-04 17:32:57.727610123 +0000 UTC m=+52.166932379" observedRunningTime="2024-09-04 17:32:58.887691148 +0000 UTC m=+53.327013304" watchObservedRunningTime="2024-09-04 17:32:58.936129699 +0000 UTC m=+53.375451855" Sep 4 17:32:59.660501 containerd[1805]: time="2024-09-04T17:32:59.660008140Z" level=info msg="StopPodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\"" Sep 4 17:32:59.663582 containerd[1805]: time="2024-09-04T17:32:59.663203870Z" level=info msg="StopPodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\"" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.724 [INFO][5045] k8s.go 608: Cleaning up netns 
ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.724 [INFO][5045] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" iface="eth0" netns="/var/run/netns/cni-45e7c437-fa6a-a2a9-60dc-8ffdedc3ba24" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.725 [INFO][5045] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" iface="eth0" netns="/var/run/netns/cni-45e7c437-fa6a-a2a9-60dc-8ffdedc3ba24" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.726 [INFO][5045] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" iface="eth0" netns="/var/run/netns/cni-45e7c437-fa6a-a2a9-60dc-8ffdedc3ba24" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.727 [INFO][5045] k8s.go 615: Releasing IP address(es) ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.727 [INFO][5045] utils.go 188: Calico CNI releasing IP address ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.755 [INFO][5058] ipam_plugin.go 417: Releasing address using handleID ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.755 [INFO][5058] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.756 [INFO][5058] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.760 [WARNING][5058] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.760 [INFO][5058] ipam_plugin.go 445: Releasing address using workloadID ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.762 [INFO][5058] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:59.767252 containerd[1805]: 2024-09-04 17:32:59.763 [INFO][5045] k8s.go 621: Teardown processing complete. 
ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:32:59.767252 containerd[1805]: time="2024-09-04T17:32:59.764890617Z" level=info msg="TearDown network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" successfully" Sep 4 17:32:59.767252 containerd[1805]: time="2024-09-04T17:32:59.764936517Z" level=info msg="StopPodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" returns successfully" Sep 4 17:32:59.771199 containerd[1805]: time="2024-09-04T17:32:59.770657270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9dpf,Uid:0887fe36-7732-4bbf-b901-deca836854e8,Namespace:calico-system,Attempt:1,}" Sep 4 17:32:59.772767 systemd[1]: run-netns-cni\x2d45e7c437\x2dfa6a\x2da2a9\x2d60dc\x2d8ffdedc3ba24.mount: Deactivated successfully. Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.725 [INFO][5041] k8s.go 608: Cleaning up netns ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.726 [INFO][5041] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" iface="eth0" netns="/var/run/netns/cni-72f5931d-c53e-9605-8cd4-e7bb917b45b1" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.726 [INFO][5041] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" iface="eth0" netns="/var/run/netns/cni-72f5931d-c53e-9605-8cd4-e7bb917b45b1" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.726 [INFO][5041] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" iface="eth0" netns="/var/run/netns/cni-72f5931d-c53e-9605-8cd4-e7bb917b45b1" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.726 [INFO][5041] k8s.go 615: Releasing IP address(es) ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.726 [INFO][5041] utils.go 188: Calico CNI releasing IP address ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.755 [INFO][5057] ipam_plugin.go 417: Releasing address using handleID ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.755 [INFO][5057] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.762 [INFO][5057] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.772 [WARNING][5057] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.772 [INFO][5057] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.774 [INFO][5057] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:59.777089 containerd[1805]: 2024-09-04 17:32:59.776 [INFO][5041] k8s.go 621: Teardown processing complete. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:32:59.777767 containerd[1805]: time="2024-09-04T17:32:59.777201131Z" level=info msg="TearDown network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" successfully" Sep 4 17:32:59.777767 containerd[1805]: time="2024-09-04T17:32:59.777224632Z" level=info msg="StopPodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" returns successfully" Sep 4 17:32:59.777767 containerd[1805]: time="2024-09-04T17:32:59.777747236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fh77d,Uid:06d0bcfd-0047-41da-912c-7db4d68edcca,Namespace:kube-system,Attempt:1,}" Sep 4 17:32:59.782655 systemd[1]: run-netns-cni\x2d72f5931d\x2dc53e\x2d9605\x2d8cd4\x2de7bb917b45b1.mount: Deactivated successfully. Sep 4 17:32:59.962695 systemd-networkd[1373]: cali2560bf120a0: Link UP Sep 4 17:32:59.962973 systemd-networkd[1373]: cali2560bf120a0: Gained carrier Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.872 [INFO][5069] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0 csi-node-driver- calico-system 0887fe36-7732-4bbf-b901-deca836854e8 741 0 2024-09-04 17:32:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.1-a-eeaffe6a3f csi-node-driver-d9dpf eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali2560bf120a0 [] []}} ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.872 [INFO][5069] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.919 [INFO][5094] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" HandleID="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" 
Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.932 [INFO][5094] ipam_plugin.go 270: Auto assigning IP ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" HandleID="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-eeaffe6a3f", "pod":"csi-node-driver-d9dpf", "timestamp":"2024-09-04 17:32:59.919247754 +0000 UTC"}, Hostname:"ci-3975.2.1-a-eeaffe6a3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.932 [INFO][5094] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.932 [INFO][5094] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.932 [INFO][5094] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-eeaffe6a3f' Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.933 [INFO][5094] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.938 [INFO][5094] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.941 [INFO][5094] ipam.go 489: Trying affinity for 192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.943 [INFO][5094] ipam.go 155: Attempting to load block cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.944 [INFO][5094] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.944 [INFO][5094] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.946 [INFO][5094] ipam.go 1685: Creating new handle: k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51 Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.949 [INFO][5094] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.953 [INFO][5094] ipam.go 1216: Successfully claimed IPs: [192.168.87.3/26] block=192.168.87.0/26 handle="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.953 [INFO][5094] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.3/26] handle="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" 
host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.953 [INFO][5094] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:59.987687 containerd[1805]: 2024-09-04 17:32:59.953 [INFO][5094] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.87.3/26] IPv6=[] ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" HandleID="k8s-pod-network.e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.990146 containerd[1805]: 2024-09-04 17:32:59.958 [INFO][5069] k8s.go 386: Populated endpoint ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0887fe36-7732-4bbf-b901-deca836854e8", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"", Pod:"csi-node-driver-d9dpf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2560bf120a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:59.990146 containerd[1805]: 2024-09-04 17:32:59.958 [INFO][5069] k8s.go 387: Calico CNI using IPs: [192.168.87.3/32] ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.990146 containerd[1805]: 2024-09-04 17:32:59.958 [INFO][5069] dataplane_linux.go 68: Setting the host side veth name to cali2560bf120a0 ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.990146 containerd[1805]: 2024-09-04 17:32:59.962 [INFO][5069] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:32:59.990146 containerd[1805]: 2024-09-04 17:32:59.965 [INFO][5069] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0887fe36-7732-4bbf-b901-deca836854e8", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51", Pod:"csi-node-driver-d9dpf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2560bf120a0", MAC:"3e:be:b5:ee:12:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:59.990146 containerd[1805]: 2024-09-04 17:32:59.981 [INFO][5069] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51" Namespace="calico-system" Pod="csi-node-driver-d9dpf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:00.007059 systemd-networkd[1373]: cali28dd5d7b789: Link UP Sep 4 17:33:00.008630 systemd-networkd[1373]: cali28dd5d7b789: Gained carrier Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.881 [INFO][5073] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0 coredns-5dd5756b68- kube-system 06d0bcfd-0047-41da-912c-7db4d68edcca 742 0 2024-09-04 17:32:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-eeaffe6a3f coredns-5dd5756b68-fh77d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28dd5d7b789 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.881 [INFO][5073] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.031897 containerd[1805]: 
2024-09-04 17:32:59.927 [INFO][5097] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" HandleID="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.935 [INFO][5097] ipam_plugin.go 270: Auto assigning IP ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" HandleID="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000340790), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-eeaffe6a3f", "pod":"coredns-5dd5756b68-fh77d", "timestamp":"2024-09-04 17:32:59.927453331 +0000 UTC"}, Hostname:"ci-3975.2.1-a-eeaffe6a3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.935 [INFO][5097] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.954 [INFO][5097] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.954 [INFO][5097] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-eeaffe6a3f' Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.956 [INFO][5097] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.965 [INFO][5097] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.970 [INFO][5097] ipam.go 489: Trying affinity for 192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.974 [INFO][5097] ipam.go 155: Attempting to load block cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.979 [INFO][5097] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.979 [INFO][5097] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.983 [INFO][5097] ipam.go 1685: Creating new handle: k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027 Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.987 [INFO][5097] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.995 [INFO][5097] ipam.go 1216: Successfully claimed IPs: [192.168.87.4/26] block=192.168.87.0/26 handle="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" 
host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.996 [INFO][5097] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.4/26] handle="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.997 [INFO][5097] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:00.031897 containerd[1805]: 2024-09-04 17:32:59.997 [INFO][5097] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.87.4/26] IPv6=[] ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" HandleID="k8s-pod-network.981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.033802 containerd[1805]: 2024-09-04 17:33:00.001 [INFO][5073] k8s.go 386: Populated endpoint ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"06d0bcfd-0047-41da-912c-7db4d68edcca", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"", Pod:"coredns-5dd5756b68-fh77d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dd5d7b789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:00.033802 containerd[1805]: 2024-09-04 17:33:00.001 [INFO][5073] k8s.go 387: Calico CNI using IPs: [192.168.87.4/32] ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.033802 containerd[1805]: 2024-09-04 17:33:00.001 [INFO][5073] dataplane_linux.go 68: Setting the host side veth name to cali28dd5d7b789 ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" 
WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.033802 containerd[1805]: 2024-09-04 17:33:00.008 [INFO][5073] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.033802 containerd[1805]: 2024-09-04 17:33:00.011 [INFO][5073] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"06d0bcfd-0047-41da-912c-7db4d68edcca", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027", Pod:"coredns-5dd5756b68-fh77d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dd5d7b789", MAC:"52:cf:51:2a:0b:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:00.033802 containerd[1805]: 2024-09-04 17:33:00.028 [INFO][5073] k8s.go 500: Wrote updated endpoint to datastore ContainerID="981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027" Namespace="kube-system" Pod="coredns-5dd5756b68-fh77d" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:00.063279 containerd[1805]: time="2024-09-04T17:33:00.063177595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:00.063279 containerd[1805]: time="2024-09-04T17:33:00.063259295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.063550 containerd[1805]: time="2024-09-04T17:33:00.063507998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:00.063793 containerd[1805]: time="2024-09-04T17:33:00.063540998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.097902 containerd[1805]: time="2024-09-04T17:33:00.097595515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:00.097902 containerd[1805]: time="2024-09-04T17:33:00.097670016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.097902 containerd[1805]: time="2024-09-04T17:33:00.097705716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:00.097902 containerd[1805]: time="2024-09-04T17:33:00.097727016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.145442 containerd[1805]: time="2024-09-04T17:33:00.145394660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9dpf,Uid:0887fe36-7732-4bbf-b901-deca836854e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51\"" Sep 4 17:33:00.151341 containerd[1805]: time="2024-09-04T17:33:00.151196114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:33:00.183837 containerd[1805]: time="2024-09-04T17:33:00.183804518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fh77d,Uid:06d0bcfd-0047-41da-912c-7db4d68edcca,Namespace:kube-system,Attempt:1,} returns sandbox id \"981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027\"" Sep 4 17:33:00.186888 containerd[1805]: time="2024-09-04T17:33:00.186850246Z" level=info msg="CreateContainer within sandbox \"981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:33:00.219061 containerd[1805]: time="2024-09-04T17:33:00.218969245Z" level=info msg="CreateContainer within sandbox \"981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d45a5bb61fd5ca5d323ad3d68d856aa9a129dd96e6d399f278dbf931b913cde\"" Sep 4 17:33:00.221091 containerd[1805]: time="2024-09-04T17:33:00.219557251Z" level=info msg="StartContainer for \"8d45a5bb61fd5ca5d323ad3d68d856aa9a129dd96e6d399f278dbf931b913cde\"" Sep 4 17:33:00.268279 containerd[1805]: time="2024-09-04T17:33:00.267867101Z" level=info msg="StartContainer for \"8d45a5bb61fd5ca5d323ad3d68d856aa9a129dd96e6d399f278dbf931b913cde\" returns successfully" Sep 4 17:33:00.894223 kubelet[3484]: I0904 17:33:00.894186 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fh77d" podStartSLOduration=45.894131533 podCreationTimestamp="2024-09-04 17:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:00.893488627 +0000 UTC m=+55.332810783" watchObservedRunningTime="2024-09-04 17:33:00.894131533 +0000 UTC m=+55.333453789" Sep 4 17:33:01.408412 systemd-networkd[1373]: cali2560bf120a0: Gained IPv6LL Sep 4 17:33:01.416917 containerd[1805]: time="2024-09-04T17:33:01.416876601Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:01.418903 containerd[1805]: time="2024-09-04T17:33:01.418843819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:33:01.421936 containerd[1805]: time="2024-09-04T17:33:01.421858747Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:01.426131 containerd[1805]: time="2024-09-04T17:33:01.426041086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:01.427708 containerd[1805]: time="2024-09-04T17:33:01.427193897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.274916773s" Sep 4 17:33:01.427708 containerd[1805]: time="2024-09-04T17:33:01.427248398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:33:01.429325 containerd[1805]: time="2024-09-04T17:33:01.428990114Z" level=info msg="CreateContainer within sandbox \"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:33:01.472039 containerd[1805]: time="2024-09-04T17:33:01.472006314Z" level=info msg="CreateContainer within sandbox \"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d6c280ec4083111f406aeb257a16920c5037de588090ae261abb0df357bda3b\"" Sep 4 17:33:01.473702 containerd[1805]: time="2024-09-04T17:33:01.472456919Z" level=info msg="StartContainer for \"2d6c280ec4083111f406aeb257a16920c5037de588090ae261abb0df357bda3b\"" Sep 4 17:33:01.534028 containerd[1805]: time="2024-09-04T17:33:01.533962191Z" level=info msg="StartContainer for \"2d6c280ec4083111f406aeb257a16920c5037de588090ae261abb0df357bda3b\" returns successfully" Sep 4 17:33:01.534977 containerd[1805]: time="2024-09-04T17:33:01.534945901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:33:01.664456 systemd-networkd[1373]: cali28dd5d7b789: Gained IPv6LL Sep 4 17:33:02.929744 containerd[1805]: time="2024-09-04T17:33:02.929643799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:02.932983 containerd[1805]: time="2024-09-04T17:33:02.932919430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:33:02.937584 containerd[1805]: time="2024-09-04T17:33:02.937442373Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:02.944696 containerd[1805]: time="2024-09-04T17:33:02.944329937Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:02.945176 containerd[1805]: time="2024-09-04T17:33:02.945003244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.410016043s" Sep 4 17:33:02.945176 containerd[1805]: time="2024-09-04T17:33:02.945040644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:33:02.948292 containerd[1805]: time="2024-09-04T17:33:02.948180974Z" level=info msg="CreateContainer within sandbox \"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:33:03.011724 containerd[1805]: time="2024-09-04T17:33:03.011686870Z" level=info msg="CreateContainer within sandbox \"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bd6fff499ce80d047136acf8cfe6ddedbf5a9c3a1c6c87715a10a5a3a07e8fe9\"" Sep 4 17:33:03.013443 containerd[1805]: time="2024-09-04T17:33:03.012169674Z" level=info msg="StartContainer for \"bd6fff499ce80d047136acf8cfe6ddedbf5a9c3a1c6c87715a10a5a3a07e8fe9\"" Sep 4 17:33:03.054161 systemd[1]: run-containerd-runc-k8s.io-bd6fff499ce80d047136acf8cfe6ddedbf5a9c3a1c6c87715a10a5a3a07e8fe9-runc.GTipet.mount: Deactivated successfully. Sep 4 17:33:03.090796 containerd[1805]: time="2024-09-04T17:33:03.090758412Z" level=info msg="StartContainer for \"bd6fff499ce80d047136acf8cfe6ddedbf5a9c3a1c6c87715a10a5a3a07e8fe9\" returns successfully" Sep 4 17:33:03.791722 kubelet[3484]: I0904 17:33:03.791686 3484 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:33:03.791722 kubelet[3484]: I0904 17:33:03.791723 3484 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:33:03.904458 kubelet[3484]: I0904 17:33:03.904418 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-d9dpf" podStartSLOduration=40.108746203 podCreationTimestamp="2024-09-04 17:32:21 +0000 UTC" firstStartedPulling="2024-09-04 17:33:00.150199405 +0000 UTC m=+54.589521561" lastFinishedPulling="2024-09-04 17:33:02.945823951 +0000 UTC m=+57.385146107" observedRunningTime="2024-09-04 17:33:03.903266439 +0000 UTC m=+58.342588595" watchObservedRunningTime="2024-09-04 17:33:03.904370749 +0000 UTC m=+58.343693005" Sep 4 17:33:05.649439 containerd[1805]: time="2024-09-04T17:33:05.649394129Z" level=info msg="StopPodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\"" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.682 [WARNING][5353] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"06d0bcfd-0047-41da-912c-7db4d68edcca", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027", Pod:"coredns-5dd5756b68-fh77d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dd5d7b789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.683 [INFO][5353] k8s.go 608: Cleaning up netns ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.683 [INFO][5353] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" iface="eth0" netns="" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.683 [INFO][5353] k8s.go 615: Releasing IP address(es) ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.683 [INFO][5353] utils.go 188: Calico CNI releasing IP address ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.701 [INFO][5361] ipam_plugin.go 417: Releasing address using handleID ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.702 [INFO][5361] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.702 [INFO][5361] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.707 [WARNING][5361] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.707 [INFO][5361] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.708 [INFO][5361] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:05.710350 containerd[1805]: 2024-09-04 17:33:05.709 [INFO][5353] k8s.go 621: Teardown processing complete. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.710993 containerd[1805]: time="2024-09-04T17:33:05.710353501Z" level=info msg="TearDown network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" successfully" Sep 4 17:33:05.710993 containerd[1805]: time="2024-09-04T17:33:05.710385002Z" level=info msg="StopPodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" returns successfully" Sep 4 17:33:05.711076 containerd[1805]: time="2024-09-04T17:33:05.710991807Z" level=info msg="RemovePodSandbox for \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\"" Sep 4 17:33:05.711076 containerd[1805]: time="2024-09-04T17:33:05.711026208Z" level=info msg="Forcibly stopping sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\"" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.740 [WARNING][5379] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"06d0bcfd-0047-41da-912c-7db4d68edcca", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"981b03683cda17cba6fe4b75cafe5c554117c8781512c7c0c9ca87cca0e7f027", Pod:"coredns-5dd5756b68-fh77d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dd5d7b789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.740 [INFO][5379] k8s.go 608: Cleaning up netns ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.740 [INFO][5379] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" iface="eth0" netns="" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.740 [INFO][5379] k8s.go 615: Releasing IP address(es) ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.740 [INFO][5379] utils.go 188: Calico CNI releasing IP address ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.762 [INFO][5385] ipam_plugin.go 417: Releasing address using handleID ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.762 [INFO][5385] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.763 [INFO][5385] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.767 [WARNING][5385] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.767 [INFO][5385] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" HandleID="k8s-pod-network.a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--fh77d-eth0" Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.769 [INFO][5385] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:05.770784 containerd[1805]: 2024-09-04 17:33:05.769 [INFO][5379] k8s.go 621: Teardown processing complete. ContainerID="a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1" Sep 4 17:33:05.771444 containerd[1805]: time="2024-09-04T17:33:05.770810169Z" level=info msg="TearDown network for sandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" successfully" Sep 4 17:33:05.781467 containerd[1805]: time="2024-09-04T17:33:05.781368068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:33:05.781652 containerd[1805]: time="2024-09-04T17:33:05.781510069Z" level=info msg="RemovePodSandbox \"a91e4c1926f915e1dd177197dc69a36535647d82e1591aab87fe28edf956a9f1\" returns successfully" Sep 4 17:33:05.782035 containerd[1805]: time="2024-09-04T17:33:05.782004174Z" level=info msg="StopPodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\"" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.813 [WARNING][5403] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"11b141d0-410e-4509-a6be-b9306c94a513", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de", Pod:"coredns-5dd5756b68-pvrs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali604632e5446", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.813 [INFO][5403] k8s.go 608: Cleaning up netns ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.813 [INFO][5403] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" iface="eth0" netns="" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.813 [INFO][5403] k8s.go 615: Releasing IP address(es) ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.813 [INFO][5403] utils.go 188: Calico CNI releasing IP address ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.831 [INFO][5409] ipam_plugin.go 417: Releasing address using handleID ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.832 [INFO][5409] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.832 [INFO][5409] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.836 [WARNING][5409] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.836 [INFO][5409] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.837 [INFO][5409] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:05.839468 containerd[1805]: 2024-09-04 17:33:05.838 [INFO][5403] k8s.go 621: Teardown processing complete. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.840364 containerd[1805]: time="2024-09-04T17:33:05.839526214Z" level=info msg="TearDown network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" successfully" Sep 4 17:33:05.840364 containerd[1805]: time="2024-09-04T17:33:05.839555814Z" level=info msg="StopPodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" returns successfully" Sep 4 17:33:05.840364 containerd[1805]: time="2024-09-04T17:33:05.840041519Z" level=info msg="RemovePodSandbox for \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\"" Sep 4 17:33:05.840364 containerd[1805]: time="2024-09-04T17:33:05.840090119Z" level=info msg="Forcibly stopping sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\"" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.871 [WARNING][5427] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"11b141d0-410e-4509-a6be-b9306c94a513", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"9fd525a1ee6f682edbb7b9822c9b836834b92746be75174aaf7625a6fef827de", Pod:"coredns-5dd5756b68-pvrs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali604632e5446", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.871 [INFO][5427] k8s.go 608: Cleaning up netns ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.871 [INFO][5427] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" iface="eth0" netns="" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.871 [INFO][5427] k8s.go 615: Releasing IP address(es) ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.871 [INFO][5427] utils.go 188: Calico CNI releasing IP address ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.889 [INFO][5433] ipam_plugin.go 417: Releasing address using handleID ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.889 [INFO][5433] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.889 [INFO][5433] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.894 [WARNING][5433] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.894 [INFO][5433] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" HandleID="k8s-pod-network.17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-coredns--5dd5756b68--pvrs8-eth0" Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.895 [INFO][5433] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:05.897668 containerd[1805]: 2024-09-04 17:33:05.896 [INFO][5427] k8s.go 621: Teardown processing complete. ContainerID="17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474" Sep 4 17:33:05.898333 containerd[1805]: time="2024-09-04T17:33:05.897709460Z" level=info msg="TearDown network for sandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" successfully" Sep 4 17:33:05.910151 containerd[1805]: time="2024-09-04T17:33:05.910095076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:33:05.910344 containerd[1805]: time="2024-09-04T17:33:05.910155077Z" level=info msg="RemovePodSandbox \"17413eef435029190ef345855e9f6ef08c0aca97c17d8fbcfdf92b5bb2813474\" returns successfully" Sep 4 17:33:05.910648 containerd[1805]: time="2024-09-04T17:33:05.910589581Z" level=info msg="StopPodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\"" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.940 [WARNING][5451] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0887fe36-7732-4bbf-b901-deca836854e8", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51", Pod:"csi-node-driver-d9dpf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2560bf120a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.940 [INFO][5451] k8s.go 608: Cleaning up netns ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.940 [INFO][5451] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" iface="eth0" netns="" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.940 [INFO][5451] k8s.go 615: Releasing IP address(es) ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.940 [INFO][5451] utils.go 188: Calico CNI releasing IP address ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.960 [INFO][5457] ipam_plugin.go 417: Releasing address using handleID ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.960 [INFO][5457] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.960 [INFO][5457] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.966 [WARNING][5457] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.966 [INFO][5457] ipam_plugin.go 445: Releasing address using workloadID ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.967 [INFO][5457] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:05.969224 containerd[1805]: 2024-09-04 17:33:05.968 [INFO][5451] k8s.go 621: Teardown processing complete. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:05.971056 containerd[1805]: time="2024-09-04T17:33:05.969284332Z" level=info msg="TearDown network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" successfully" Sep 4 17:33:05.971056 containerd[1805]: time="2024-09-04T17:33:05.969314632Z" level=info msg="StopPodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" returns successfully" Sep 4 17:33:05.971056 containerd[1805]: time="2024-09-04T17:33:05.970442243Z" level=info msg="RemovePodSandbox for \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\"" Sep 4 17:33:05.971056 containerd[1805]: time="2024-09-04T17:33:05.970510743Z" level=info msg="Forcibly stopping sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\"" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.003 [WARNING][5475] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0887fe36-7732-4bbf-b901-deca836854e8", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"e01f0e6225097ccb8e5f2b37ee184fcc0390f3105a5aefd318d30147aabc1f51", Pod:"csi-node-driver-d9dpf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2560bf120a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.003 [INFO][5475] k8s.go 608: Cleaning up netns ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.003 [INFO][5475] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" iface="eth0" netns="" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.003 [INFO][5475] k8s.go 615: Releasing IP address(es) ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.003 [INFO][5475] utils.go 188: Calico CNI releasing IP address ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.022 [INFO][5481] ipam_plugin.go 417: Releasing address using handleID ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.022 [INFO][5481] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.022 [INFO][5481] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.026 [WARNING][5481] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.027 [INFO][5481] ipam_plugin.go 445: Releasing address using workloadID ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" HandleID="k8s-pod-network.86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-csi--node--driver--d9dpf-eth0" Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.028 [INFO][5481] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:06.030285 containerd[1805]: 2024-09-04 17:33:06.029 [INFO][5475] k8s.go 621: Teardown processing complete. ContainerID="86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709" Sep 4 17:33:06.030917 containerd[1805]: time="2024-09-04T17:33:06.030313005Z" level=info msg="TearDown network for sandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" successfully" Sep 4 17:33:06.036038 containerd[1805]: time="2024-09-04T17:33:06.035995858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:33:06.036162 containerd[1805]: time="2024-09-04T17:33:06.036070459Z" level=info msg="RemovePodSandbox \"86e138b8bce3fe796a51ba75f6ea284c644fdd8fafa9a448b61644a48f632709\" returns successfully" Sep 4 17:33:06.036651 containerd[1805]: time="2024-09-04T17:33:06.036614664Z" level=info msg="StopPodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\"" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.069 [WARNING][5499] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0", GenerateName:"calico-kube-controllers-5644997dc9-", Namespace:"calico-system", SelfLink:"", UID:"f0f33dc0-57aa-4818-afaf-a2d00c3723b4", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5644997dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52", Pod:"calico-kube-controllers-5644997dc9-nplr7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a3e6f5c76e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.069 [INFO][5499] k8s.go 608: Cleaning up netns ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.069 [INFO][5499] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" iface="eth0" netns="" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.069 [INFO][5499] k8s.go 615: Releasing IP address(es) ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.069 [INFO][5499] utils.go 188: Calico CNI releasing IP address ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.087 [INFO][5505] ipam_plugin.go 417: Releasing address using handleID ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.088 [INFO][5505] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.088 [INFO][5505] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.093 [WARNING][5505] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.093 [INFO][5505] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.095 [INFO][5505] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:06.097383 containerd[1805]: 2024-09-04 17:33:06.096 [INFO][5499] k8s.go 621: Teardown processing complete. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.097970 containerd[1805]: time="2024-09-04T17:33:06.097408934Z" level=info msg="TearDown network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" successfully" Sep 4 17:33:06.097970 containerd[1805]: time="2024-09-04T17:33:06.097436735Z" level=info msg="StopPodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" returns successfully" Sep 4 17:33:06.097970 containerd[1805]: time="2024-09-04T17:33:06.097900739Z" level=info msg="RemovePodSandbox for \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\"" Sep 4 17:33:06.097970 containerd[1805]: time="2024-09-04T17:33:06.097937239Z" level=info msg="Forcibly stopping sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\"" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.128 [WARNING][5523] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0", GenerateName:"calico-kube-controllers-5644997dc9-", Namespace:"calico-system", SelfLink:"", UID:"f0f33dc0-57aa-4818-afaf-a2d00c3723b4", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5644997dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"9e577a64cd81640d60ba531b841fea89e31990b6e57963ed29c2c82758d2ae52", Pod:"calico-kube-controllers-5644997dc9-nplr7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a3e6f5c76e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.128 [INFO][5523] k8s.go 608: Cleaning up netns ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.128 [INFO][5523] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" iface="eth0" netns="" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.128 [INFO][5523] k8s.go 615: Releasing IP address(es) ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.128 [INFO][5523] utils.go 188: Calico CNI releasing IP address ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.146 [INFO][5529] ipam_plugin.go 417: Releasing address using handleID ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.146 [INFO][5529] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.146 [INFO][5529] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.151 [WARNING][5529] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.152 [INFO][5529] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" HandleID="k8s-pod-network.c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--kube--controllers--5644997dc9--nplr7-eth0" Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.153 [INFO][5529] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:06.154883 containerd[1805]: 2024-09-04 17:33:06.153 [INFO][5523] k8s.go 621: Teardown processing complete. ContainerID="c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4" Sep 4 17:33:06.154883 containerd[1805]: time="2024-09-04T17:33:06.154869574Z" level=info msg="TearDown network for sandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" successfully" Sep 4 17:33:06.164838 containerd[1805]: time="2024-09-04T17:33:06.164740067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:33:06.164936 containerd[1805]: time="2024-09-04T17:33:06.164845467Z" level=info msg="RemovePodSandbox \"c7bc8517e117ed12d8d0a18dc5f8cca26b24b10e31d576759f7c3a2fe506b0a4\" returns successfully" Sep 4 17:33:11.269339 kubelet[3484]: I0904 17:33:11.266208 3484 topology_manager.go:215] "Topology Admit Handler" podUID="2619f097-78c0-4f2c-8b6e-0356db01a441" podNamespace="calico-apiserver" podName="calico-apiserver-f76555d76-dmhbf" Sep 4 17:33:11.292475 kubelet[3484]: I0904 17:33:11.292449 3484 topology_manager.go:215] "Topology Admit Handler" podUID="a4f7f86c-0266-44b2-b89d-a67b43669522" podNamespace="calico-apiserver" podName="calico-apiserver-f76555d76-h5zdl" Sep 4 17:33:11.378026 kubelet[3484]: I0904 17:33:11.377990 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppw6h\" (UniqueName: \"kubernetes.io/projected/2619f097-78c0-4f2c-8b6e-0356db01a441-kube-api-access-ppw6h\") pod \"calico-apiserver-f76555d76-dmhbf\" (UID: \"2619f097-78c0-4f2c-8b6e-0356db01a441\") " pod="calico-apiserver/calico-apiserver-f76555d76-dmhbf" Sep 4 17:33:11.378314 kubelet[3484]: I0904 17:33:11.378058 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2619f097-78c0-4f2c-8b6e-0356db01a441-calico-apiserver-certs\") pod \"calico-apiserver-f76555d76-dmhbf\" (UID: \"2619f097-78c0-4f2c-8b6e-0356db01a441\") " pod="calico-apiserver/calico-apiserver-f76555d76-dmhbf" Sep 4 17:33:11.479030 kubelet[3484]: I0904 17:33:11.478883 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwpt\" (UniqueName: \"kubernetes.io/projected/a4f7f86c-0266-44b2-b89d-a67b43669522-kube-api-access-ldwpt\") pod \"calico-apiserver-f76555d76-h5zdl\" (UID: \"a4f7f86c-0266-44b2-b89d-a67b43669522\") " pod="calico-apiserver/calico-apiserver-f76555d76-h5zdl" Sep 4 17:33:11.479784 kubelet[3484]: I0904 
17:33:11.479269 3484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4f7f86c-0266-44b2-b89d-a67b43669522-calico-apiserver-certs\") pod \"calico-apiserver-f76555d76-h5zdl\" (UID: \"a4f7f86c-0266-44b2-b89d-a67b43669522\") " pod="calico-apiserver/calico-apiserver-f76555d76-h5zdl" Sep 4 17:33:11.479784 kubelet[3484]: E0904 17:33:11.479522 3484 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:33:11.479784 kubelet[3484]: E0904 17:33:11.479599 3484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2619f097-78c0-4f2c-8b6e-0356db01a441-calico-apiserver-certs podName:2619f097-78c0-4f2c-8b6e-0356db01a441 nodeName:}" failed. No retries permitted until 2024-09-04 17:33:11.979564769 +0000 UTC m=+66.418886925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2619f097-78c0-4f2c-8b6e-0356db01a441-calico-apiserver-certs") pod "calico-apiserver-f76555d76-dmhbf" (UID: "2619f097-78c0-4f2c-8b6e-0356db01a441") : secret "calico-apiserver-certs" not found Sep 4 17:33:11.605814 containerd[1805]: time="2024-09-04T17:33:11.604646109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76555d76-h5zdl,Uid:a4f7f86c-0266-44b2-b89d-a67b43669522,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:33:11.734144 systemd-networkd[1373]: cali8a54a5e4c56: Link UP Sep 4 17:33:11.734432 systemd-networkd[1373]: cali8a54a5e4c56: Gained carrier Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.674 [INFO][5598] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0 calico-apiserver-f76555d76- calico-apiserver a4f7f86c-0266-44b2-b89d-a67b43669522 841 0 2024-09-04 17:33:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f76555d76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-eeaffe6a3f calico-apiserver-f76555d76-h5zdl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a54a5e4c56 [] []}} ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.674 [INFO][5598] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.699 [INFO][5610] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" HandleID="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.706 [INFO][5610] ipam_plugin.go 270: Auto assigning IP 
ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" HandleID="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265ef0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-eeaffe6a3f", "pod":"calico-apiserver-f76555d76-h5zdl", "timestamp":"2024-09-04 17:33:11.699952078 +0000 UTC"}, Hostname:"ci-3975.2.1-a-eeaffe6a3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.706 [INFO][5610] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.707 [INFO][5610] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.707 [INFO][5610] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-eeaffe6a3f' Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.708 [INFO][5610] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.713 [INFO][5610] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.716 [INFO][5610] ipam.go 489: Trying affinity for 192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.717 [INFO][5610] ipam.go 155: Attempting to load block cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.719 [INFO][5610] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.719 [INFO][5610] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.720 [INFO][5610] ipam.go 1685: Creating new handle: k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575 Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.723 [INFO][5610] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.727 [INFO][5610] ipam.go 1216: Successfully claimed IPs: [192.168.87.5/26] block=192.168.87.0/26 handle="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.727 [INFO][5610] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.5/26] handle="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.727 [INFO][5610] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:33:11.759821 containerd[1805]: 2024-09-04 17:33:11.727 [INFO][5610] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.87.5/26] IPv6=[] ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" HandleID="k8s-pod-network.ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.765609 containerd[1805]: 2024-09-04 17:33:11.730 [INFO][5598] k8s.go 386: Populated endpoint ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0", GenerateName:"calico-apiserver-f76555d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4f7f86c-0266-44b2-b89d-a67b43669522", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f76555d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"", Pod:"calico-apiserver-f76555d76-h5zdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a54a5e4c56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:11.765609 containerd[1805]: 2024-09-04 17:33:11.730 [INFO][5598] k8s.go 387: Calico CNI using IPs: [192.168.87.5/32] ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.765609 containerd[1805]: 2024-09-04 17:33:11.730 [INFO][5598] dataplane_linux.go 68: Setting the host side veth name to cali8a54a5e4c56 ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.765609 containerd[1805]: 2024-09-04 17:33:11.734 [INFO][5598] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.765609 containerd[1805]: 2024-09-04 17:33:11.735 [INFO][5598] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0", GenerateName:"calico-apiserver-f76555d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4f7f86c-0266-44b2-b89d-a67b43669522", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f76555d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575", Pod:"calico-apiserver-f76555d76-h5zdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a54a5e4c56", MAC:"c2:12:c6:46:a5:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:11.765609 containerd[1805]: 2024-09-04 17:33:11.755 [INFO][5598] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-h5zdl" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--h5zdl-eth0" Sep 4 17:33:11.792303 containerd[1805]: time="2024-09-04T17:33:11.791417912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:11.792459 containerd[1805]: time="2024-09-04T17:33:11.792313420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:11.792544 containerd[1805]: time="2024-09-04T17:33:11.792455721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:11.792544 containerd[1805]: time="2024-09-04T17:33:11.792501022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:11.860528 containerd[1805]: time="2024-09-04T17:33:11.860152438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76555d76-h5zdl,Uid:a4f7f86c-0266-44b2-b89d-a67b43669522,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575\"" Sep 4 17:33:11.863720 containerd[1805]: time="2024-09-04T17:33:11.863691771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:33:12.178348 containerd[1805]: time="2024-09-04T17:33:12.178295838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76555d76-dmhbf,Uid:2619f097-78c0-4f2c-8b6e-0356db01a441,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:33:12.305034 systemd-networkd[1373]: calibbb6ad16282: Link UP Sep 4 17:33:12.305934 systemd-networkd[1373]: calibbb6ad16282: Gained carrier Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.243 [INFO][5675] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0 calico-apiserver-f76555d76- calico-apiserver 2619f097-78c0-4f2c-8b6e-0356db01a441 839 0 2024-09-04 17:33:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f76555d76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-eeaffe6a3f calico-apiserver-f76555d76-dmhbf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbb6ad16282 [] []}} ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.243 [INFO][5675] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.269 [INFO][5682] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" HandleID="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.276 [INFO][5682] ipam_plugin.go 270: Auto assigning IP ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" HandleID="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-eeaffe6a3f", "pod":"calico-apiserver-f76555d76-dmhbf", "timestamp":"2024-09-04 17:33:12.269881373 +0000 UTC"}, Hostname:"ci-3975.2.1-a-eeaffe6a3f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.276 [INFO][5682] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.276 [INFO][5682] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.276 [INFO][5682] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-eeaffe6a3f' Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.277 [INFO][5682] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.281 [INFO][5682] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.284 [INFO][5682] ipam.go 489: Trying affinity for 192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.286 [INFO][5682] ipam.go 155: Attempting to load block cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.288 [INFO][5682] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.288 [INFO][5682] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.289 [INFO][5682] ipam.go 1685: Creating new handle: k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.295 [INFO][5682] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.300 [INFO][5682] ipam.go 1216: Successfully claimed IPs: [192.168.87.6/26] block=192.168.87.0/26 handle="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.300 [INFO][5682] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.6/26] handle="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" host="ci-3975.2.1-a-eeaffe6a3f" Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.300 [INFO][5682] ipam_plugin.go 379: Released host-wide IPAM lock. 
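A few entries earlier (17:33:11.479599), kubelet could not mount the calico-apiserver-certs volume because the Secret had not been created yet, and deferred the retry: "No retries permitted until ... (durationBeforeRetry 500ms)". The sketch below shows the general shape of that retry-with-backoff behaviour, assuming an initial 500 ms delay that doubles after each failure up to a cap; it is a simplified illustration, not kubelet's actual nestedpendingoperations code.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountWithBackoff retries op, waiting 500ms after the first failure and
    // doubling the wait after each subsequent failure, capped at maxDelay.
    // Each failure logs the earliest time a retry is permitted, much like the
    // "No retries permitted until <timestamp>" entry in the transcript.
    func mountWithBackoff(op func() error, maxDelay time.Duration) error {
        delay := 500 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := op()
            if err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed: %v; no retries permitted until %s\n",
                attempt, err, time.Now().Add(delay).Format(time.RFC3339))
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }

    func main() {
        tries := 0
        err := mountWithBackoff(func() error {
            tries++
            if tries < 3 {
                // Stand-in for: secret "calico-apiserver-certs" not found.
                return errors.New(`secret "calico-apiserver-certs" not found`)
            }
            return nil // the Secret now exists, so the mount succeeds
        }, 2*time.Minute)
        fmt.Println("mounted after", tries, "attempts, err =", err)
    }
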
Sep 4 17:33:12.319514 containerd[1805]: 2024-09-04 17:33:12.300 [INFO][5682] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.87.6/26] IPv6=[] ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" HandleID="k8s-pod-network.61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Workload="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.320452 containerd[1805]: 2024-09-04 17:33:12.302 [INFO][5675] k8s.go 386: Populated endpoint ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0", GenerateName:"calico-apiserver-f76555d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"2619f097-78c0-4f2c-8b6e-0356db01a441", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f76555d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"", Pod:"calico-apiserver-f76555d76-dmhbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb6ad16282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:12.320452 containerd[1805]: 2024-09-04 17:33:12.302 [INFO][5675] k8s.go 387: Calico CNI using IPs: [192.168.87.6/32] ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.320452 containerd[1805]: 2024-09-04 17:33:12.302 [INFO][5675] dataplane_linux.go 68: Setting the host side veth name to calibbb6ad16282 ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.320452 containerd[1805]: 2024-09-04 17:33:12.306 [INFO][5675] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.320452 containerd[1805]: 2024-09-04 17:33:12.306 [INFO][5675] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0", GenerateName:"calico-apiserver-f76555d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"2619f097-78c0-4f2c-8b6e-0356db01a441", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f76555d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-eeaffe6a3f", ContainerID:"61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c", Pod:"calico-apiserver-f76555d76-dmhbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb6ad16282", MAC:"fa:9a:50:d9:87:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:12.320452 containerd[1805]: 2024-09-04 17:33:12.315 [INFO][5675] k8s.go 500: Wrote updated endpoint to datastore ContainerID="61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c" Namespace="calico-apiserver" Pod="calico-apiserver-f76555d76-dmhbf" WorkloadEndpoint="ci--3975.2.1--a--eeaffe6a3f-k8s-calico--apiserver--f76555d76--dmhbf-eth0" Sep 4 17:33:12.360583 containerd[1805]: time="2024-09-04T17:33:12.359723192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:12.360583 containerd[1805]: time="2024-09-04T17:33:12.360322097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:12.360583 containerd[1805]: time="2024-09-04T17:33:12.360345297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:12.360583 containerd[1805]: time="2024-09-04T17:33:12.360359997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:12.424620 containerd[1805]: time="2024-09-04T17:33:12.424580383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76555d76-dmhbf,Uid:2619f097-78c0-4f2c-8b6e-0356db01a441,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c\"" Sep 4 17:33:12.800411 systemd-networkd[1373]: cali8a54a5e4c56: Gained IPv6LL Sep 4 17:33:13.824436 systemd-networkd[1373]: calibbb6ad16282: Gained IPv6LL Sep 4 17:33:14.767150 containerd[1805]: time="2024-09-04T17:33:14.767101234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:14.769242 containerd[1805]: time="2024-09-04T17:33:14.769168952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:33:14.772932 containerd[1805]: time="2024-09-04T17:33:14.772868886Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:14.778662 containerd[1805]: time="2024-09-04T17:33:14.778609638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:14.779544 containerd[1805]: time="2024-09-04T17:33:14.779401646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.915509474s" Sep 4 17:33:14.779544 containerd[1805]: time="2024-09-04T17:33:14.779437946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:33:14.781706 containerd[1805]: time="2024-09-04T17:33:14.780529556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:33:14.782260 containerd[1805]: time="2024-09-04T17:33:14.782208571Z" level=info msg="CreateContainer within sandbox \"ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:33:14.830338 containerd[1805]: time="2024-09-04T17:33:14.830307210Z" level=info msg="CreateContainer within sandbox \"ca657ae2a2efcde8948fe976b73d09bbdfd4186f89b8ea794bc328aab5c2e575\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"332a21cf6cc5e0874e7040368e301d25af6548947d3638aa6e468e5b7abe9cf4\"" Sep 4 17:33:14.830785 containerd[1805]: time="2024-09-04T17:33:14.830744914Z" level=info msg="StartContainer for \"332a21cf6cc5e0874e7040368e301d25af6548947d3638aa6e468e5b7abe9cf4\"" Sep 4 17:33:14.911031 containerd[1805]: time="2024-09-04T17:33:14.910947645Z" level=info msg="StartContainer for \"332a21cf6cc5e0874e7040368e301d25af6548947d3638aa6e468e5b7abe9cf4\" returns successfully" Sep 4 17:33:14.955958 kubelet[3484]: I0904 17:33:14.953855 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f76555d76-h5zdl" 
podStartSLOduration=1.037117451 podCreationTimestamp="2024-09-04 17:33:11 +0000 UTC" firstStartedPulling="2024-09-04 17:33:11.863288967 +0000 UTC m=+66.302611123" lastFinishedPulling="2024-09-04 17:33:14.779980351 +0000 UTC m=+69.219302507" observedRunningTime="2024-09-04 17:33:14.952403222 +0000 UTC m=+69.391725478" watchObservedRunningTime="2024-09-04 17:33:14.953808835 +0000 UTC m=+69.393131091" Sep 4 17:33:15.099022 containerd[1805]: time="2024-09-04T17:33:15.098899258Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:15.104761 containerd[1805]: time="2024-09-04T17:33:15.103881003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:33:15.108256 containerd[1805]: time="2024-09-04T17:33:15.106307525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 325.734169ms" Sep 4 17:33:15.108256 containerd[1805]: time="2024-09-04T17:33:15.106357026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:33:15.110348 containerd[1805]: time="2024-09-04T17:33:15.110315762Z" level=info msg="CreateContainer within sandbox \"61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:33:15.144930 containerd[1805]: time="2024-09-04T17:33:15.144894277Z" level=info msg="CreateContainer within sandbox \"61ef1b5177cbc1b4cab71e6846fac980c7d9ab97b76dbb6e8c9e64ecbf7e250c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a53ad451302c5d3c10e641788b573de6572ab8674dfd354fa9eff6aba44942d\"" Sep 4 17:33:15.145446 containerd[1805]: time="2024-09-04T17:33:15.145410782Z" level=info msg="StartContainer for \"7a53ad451302c5d3c10e641788b573de6572ab8674dfd354fa9eff6aba44942d\"" Sep 4 17:33:15.245975 containerd[1805]: time="2024-09-04T17:33:15.245936998Z" level=info msg="StartContainer for \"7a53ad451302c5d3c10e641788b573de6572ab8674dfd354fa9eff6aba44942d\" returns successfully" Sep 4 17:33:16.086170 kubelet[3484]: I0904 17:33:16.085717 3484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f76555d76-dmhbf" podStartSLOduration=2.404637516 podCreationTimestamp="2024-09-04 17:33:11 +0000 UTC" firstStartedPulling="2024-09-04 17:33:12.425574392 +0000 UTC m=+66.864896548" lastFinishedPulling="2024-09-04 17:33:15.106609228 +0000 UTC m=+69.545931384" observedRunningTime="2024-09-04 17:33:15.966307564 +0000 UTC m=+70.405629820" watchObservedRunningTime="2024-09-04 17:33:16.085672352 +0000 UTC m=+70.524994608" Sep 4 17:33:39.502530 systemd[1]: Started sshd@7-10.200.8.37:22-10.200.16.10:53460.service - OpenSSH per-connection server daemon (10.200.16.10:53460). 
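The two "Observed pod startup duration" entries above are internally consistent: podStartSLOduration appears to be the wall-clock time from pod creation to the watch-observed running time, minus the time spent pulling images (which the startup-latency tracker excludes). Working through the logged timestamps:

    calico-apiserver-f76555d76-h5zdl
        running - created   = 17:33:14.953808835 - 17:33:11           = 3.953808835 s
        image pull duration = 17:33:14.779980351 - 17:33:11.863288967 = 2.916691384 s
        podStartSLOduration = 3.953808835 - 2.916691384               = 1.037117451 s  (matches the log)

    calico-apiserver-f76555d76-dmhbf
        running - created   = 17:33:16.085672352 - 17:33:11           = 5.085672352 s
        image pull duration = 17:33:15.106609228 - 17:33:12.425574392 = 2.681034836 s
        podStartSLOduration = 5.085672352 - 2.681034836               = 2.404637516 s  (matches the log)
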
Sep 4 17:33:40.120642 sshd[5894]: Accepted publickey for core from 10.200.16.10 port 53460 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:33:40.122449 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:40.126609 systemd-logind[1777]: New session 10 of user core. Sep 4 17:33:40.134661 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:33:40.673581 sshd[5894]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:40.676805 systemd[1]: sshd@7-10.200.8.37:22-10.200.16.10:53460.service: Deactivated successfully. Sep 4 17:33:40.682551 systemd-logind[1777]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:33:40.682866 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:33:40.684296 systemd-logind[1777]: Removed session 10. Sep 4 17:33:40.787200 systemd[1]: run-containerd-runc-k8s.io-c0e9a909b1786823607efe3a04ba650417428965e75b6cda196dc11b14362259-runc.oSHY6C.mount: Deactivated successfully. Sep 4 17:33:45.783003 systemd[1]: Started sshd@8-10.200.8.37:22-10.200.16.10:53474.service - OpenSSH per-connection server daemon (10.200.16.10:53474). Sep 4 17:33:46.405955 sshd[5947]: Accepted publickey for core from 10.200.16.10 port 53474 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:33:46.407483 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:46.412338 systemd-logind[1777]: New session 11 of user core. Sep 4 17:33:46.415529 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:33:46.903141 sshd[5947]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:46.906381 systemd[1]: sshd@8-10.200.8.37:22-10.200.16.10:53474.service: Deactivated successfully. Sep 4 17:33:46.912006 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:33:46.913672 systemd-logind[1777]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:33:46.914784 systemd-logind[1777]: Removed session 11. Sep 4 17:33:52.011547 systemd[1]: Started sshd@9-10.200.8.37:22-10.200.16.10:57696.service - OpenSSH per-connection server daemon (10.200.16.10:57696). Sep 4 17:33:52.632320 sshd[5969]: Accepted publickey for core from 10.200.16.10 port 57696 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:33:52.633832 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:52.637968 systemd-logind[1777]: New session 12 of user core. Sep 4 17:33:52.643475 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:33:53.125007 sshd[5969]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:53.129787 systemd[1]: sshd@9-10.200.8.37:22-10.200.16.10:57696.service: Deactivated successfully. Sep 4 17:33:53.133737 systemd-logind[1777]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:33:53.134373 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:33:53.135562 systemd-logind[1777]: Removed session 12. Sep 4 17:33:53.234520 systemd[1]: Started sshd@10-10.200.8.37:22-10.200.16.10:57698.service - OpenSSH per-connection server daemon (10.200.16.10:57698). Sep 4 17:33:53.860084 sshd[5986]: Accepted publickey for core from 10.200.16.10 port 57698 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:33:53.861779 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:53.866567 systemd-logind[1777]: New session 13 of user core. 
Sep 4 17:33:53.870518 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:33:54.981428 sshd[5986]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:54.986642 systemd[1]: sshd@10-10.200.8.37:22-10.200.16.10:57698.service: Deactivated successfully. Sep 4 17:33:54.990633 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:33:54.991549 systemd-logind[1777]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:33:54.992615 systemd-logind[1777]: Removed session 13. Sep 4 17:33:55.089498 systemd[1]: Started sshd@11-10.200.8.37:22-10.200.16.10:57710.service - OpenSSH per-connection server daemon (10.200.16.10:57710). Sep 4 17:33:55.711299 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 57710 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:33:55.712752 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:55.716892 systemd-logind[1777]: New session 14 of user core. Sep 4 17:33:55.721495 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:33:56.213253 sshd[5998]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:56.216690 systemd[1]: sshd@11-10.200.8.37:22-10.200.16.10:57710.service: Deactivated successfully. Sep 4 17:33:56.222221 systemd-logind[1777]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:33:56.222855 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:33:56.224404 systemd-logind[1777]: Removed session 14. Sep 4 17:34:01.321674 systemd[1]: Started sshd@12-10.200.8.37:22-10.200.16.10:47996.service - OpenSSH per-connection server daemon (10.200.16.10:47996). Sep 4 17:34:01.940543 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 47996 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:01.942105 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:01.946399 systemd-logind[1777]: New session 15 of user core. Sep 4 17:34:01.948554 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:34:02.444880 sshd[6021]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:02.448480 systemd[1]: sshd@12-10.200.8.37:22-10.200.16.10:47996.service: Deactivated successfully. Sep 4 17:34:02.455110 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:34:02.455511 systemd-logind[1777]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:34:02.456887 systemd-logind[1777]: Removed session 15. Sep 4 17:34:07.552523 systemd[1]: Started sshd@13-10.200.8.37:22-10.200.16.10:48002.service - OpenSSH per-connection server daemon (10.200.16.10:48002). Sep 4 17:34:08.174223 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 48002 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:08.176025 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:08.180781 systemd-logind[1777]: New session 16 of user core. Sep 4 17:34:08.186287 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:34:08.679188 sshd[6037]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:08.686051 systemd[1]: sshd@13-10.200.8.37:22-10.200.16.10:48002.service: Deactivated successfully. Sep 4 17:34:08.689505 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:34:08.690506 systemd-logind[1777]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:34:08.691462 systemd-logind[1777]: Removed session 16. 
Sep 4 17:34:13.789867 systemd[1]: Started sshd@14-10.200.8.37:22-10.200.16.10:38504.service - OpenSSH per-connection server daemon (10.200.16.10:38504). Sep 4 17:34:14.435385 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 38504 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:14.437191 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:14.442835 systemd-logind[1777]: New session 17 of user core. Sep 4 17:34:14.451513 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:34:14.935498 sshd[6097]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:14.940638 systemd[1]: sshd@14-10.200.8.37:22-10.200.16.10:38504.service: Deactivated successfully. Sep 4 17:34:14.944863 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:34:14.945741 systemd-logind[1777]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:34:14.946718 systemd-logind[1777]: Removed session 17. Sep 4 17:34:15.044501 systemd[1]: Started sshd@15-10.200.8.37:22-10.200.16.10:38510.service - OpenSSH per-connection server daemon (10.200.16.10:38510). Sep 4 17:34:15.665200 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 38510 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:15.666811 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:15.670825 systemd-logind[1777]: New session 18 of user core. Sep 4 17:34:15.675573 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:34:16.301217 sshd[6111]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:16.304973 systemd[1]: sshd@15-10.200.8.37:22-10.200.16.10:38510.service: Deactivated successfully. Sep 4 17:34:16.310284 systemd-logind[1777]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:34:16.310798 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:34:16.312519 systemd-logind[1777]: Removed session 18. Sep 4 17:34:16.410595 systemd[1]: Started sshd@16-10.200.8.37:22-10.200.16.10:38516.service - OpenSSH per-connection server daemon (10.200.16.10:38516). Sep 4 17:34:17.032461 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 38516 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:17.033922 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:17.038422 systemd-logind[1777]: New session 19 of user core. Sep 4 17:34:17.041489 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:34:18.447767 sshd[6125]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:18.453259 systemd[1]: sshd@16-10.200.8.37:22-10.200.16.10:38516.service: Deactivated successfully. Sep 4 17:34:18.457465 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:34:18.458409 systemd-logind[1777]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:34:18.459453 systemd-logind[1777]: Removed session 19. Sep 4 17:34:18.557718 systemd[1]: Started sshd@17-10.200.8.37:22-10.200.16.10:50320.service - OpenSSH per-connection server daemon (10.200.16.10:50320). Sep 4 17:34:19.177720 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 50320 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:19.179830 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:19.183982 systemd-logind[1777]: New session 20 of user core. 
Sep 4 17:34:19.189547 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:34:19.877679 sshd[6150]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:19.883069 systemd[1]: sshd@17-10.200.8.37:22-10.200.16.10:50320.service: Deactivated successfully. Sep 4 17:34:19.887143 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:34:19.888118 systemd-logind[1777]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:34:19.889182 systemd-logind[1777]: Removed session 20. Sep 4 17:34:19.985800 systemd[1]: Started sshd@18-10.200.8.37:22-10.200.16.10:50326.service - OpenSSH per-connection server daemon (10.200.16.10:50326). Sep 4 17:34:20.605041 sshd[6163]: Accepted publickey for core from 10.200.16.10 port 50326 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:20.607196 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:20.616399 systemd-logind[1777]: New session 21 of user core. Sep 4 17:34:20.621053 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:34:21.099849 sshd[6163]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:21.108702 systemd[1]: sshd@18-10.200.8.37:22-10.200.16.10:50326.service: Deactivated successfully. Sep 4 17:34:21.112109 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:34:21.113046 systemd-logind[1777]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:34:21.114078 systemd-logind[1777]: Removed session 21. Sep 4 17:34:26.209591 systemd[1]: Started sshd@19-10.200.8.37:22-10.200.16.10:50342.service - OpenSSH per-connection server daemon (10.200.16.10:50342). Sep 4 17:34:26.850647 sshd[6182]: Accepted publickey for core from 10.200.16.10 port 50342 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:26.852528 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:26.856820 systemd-logind[1777]: New session 22 of user core. Sep 4 17:34:26.860784 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:34:27.354334 sshd[6182]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:27.358532 systemd[1]: sshd@19-10.200.8.37:22-10.200.16.10:50342.service: Deactivated successfully. Sep 4 17:34:27.362953 systemd-logind[1777]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:34:27.363965 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:34:27.365261 systemd-logind[1777]: Removed session 22. Sep 4 17:34:32.460740 systemd[1]: Started sshd@20-10.200.8.37:22-10.200.16.10:41340.service - OpenSSH per-connection server daemon (10.200.16.10:41340). Sep 4 17:34:33.084555 sshd[6219]: Accepted publickey for core from 10.200.16.10 port 41340 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:33.086185 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:33.090370 systemd-logind[1777]: New session 23 of user core. Sep 4 17:34:33.093579 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:34:33.577584 sshd[6219]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:33.581224 systemd[1]: sshd@20-10.200.8.37:22-10.200.16.10:41340.service: Deactivated successfully. Sep 4 17:34:33.587584 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:34:33.588665 systemd-logind[1777]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:34:33.589591 systemd-logind[1777]: Removed session 23. 
Sep 4 17:34:38.685508 systemd[1]: Started sshd@21-10.200.8.37:22-10.200.16.10:44014.service - OpenSSH per-connection server daemon (10.200.16.10:44014). Sep 4 17:34:39.311917 sshd[6255]: Accepted publickey for core from 10.200.16.10 port 44014 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:39.313440 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:39.320351 systemd-logind[1777]: New session 24 of user core. Sep 4 17:34:39.323708 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:34:39.814790 sshd[6255]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:39.817789 systemd[1]: sshd@21-10.200.8.37:22-10.200.16.10:44014.service: Deactivated successfully. Sep 4 17:34:39.822079 systemd-logind[1777]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:34:39.824134 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:34:39.826347 systemd-logind[1777]: Removed session 24. Sep 4 17:34:44.922539 systemd[1]: Started sshd@22-10.200.8.37:22-10.200.16.10:44026.service - OpenSSH per-connection server daemon (10.200.16.10:44026). Sep 4 17:34:45.543595 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 44026 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:45.545386 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:45.550572 systemd-logind[1777]: New session 25 of user core. Sep 4 17:34:45.557181 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:34:46.039566 sshd[6315]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:46.043793 systemd[1]: sshd@22-10.200.8.37:22-10.200.16.10:44026.service: Deactivated successfully. Sep 4 17:34:46.048778 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:34:46.049654 systemd-logind[1777]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:34:46.050697 systemd-logind[1777]: Removed session 25. Sep 4 17:34:51.147531 systemd[1]: Started sshd@23-10.200.8.37:22-10.200.16.10:52674.service - OpenSSH per-connection server daemon (10.200.16.10:52674). Sep 4 17:34:51.775420 sshd[6335]: Accepted publickey for core from 10.200.16.10 port 52674 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:51.776920 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:51.781084 systemd-logind[1777]: New session 26 of user core. Sep 4 17:34:51.785779 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:34:52.301841 sshd[6335]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:52.306267 systemd[1]: sshd@23-10.200.8.37:22-10.200.16.10:52674.service: Deactivated successfully. Sep 4 17:34:52.311170 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:34:52.312064 systemd-logind[1777]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:34:52.313071 systemd-logind[1777]: Removed session 26. Sep 4 17:34:57.410577 systemd[1]: Started sshd@24-10.200.8.37:22-10.200.16.10:52690.service - OpenSSH per-connection server daemon (10.200.16.10:52690). Sep 4 17:34:58.031274 sshd[6353]: Accepted publickey for core from 10.200.16.10 port 52690 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:58.032802 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:58.037003 systemd-logind[1777]: New session 27 of user core. 
Sep 4 17:34:58.044777 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:34:58.532995 sshd[6353]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:58.536725 systemd[1]: sshd@24-10.200.8.37:22-10.200.16.10:52690.service: Deactivated successfully. Sep 4 17:34:58.542764 systemd-logind[1777]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:34:58.543051 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:34:58.544513 systemd-logind[1777]: Removed session 27.
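Every SSH connection in the tail of this transcript follows the same lifecycle: systemd starts a per-connection sshd@<n>-<local-addr>:22-<peer-addr>:<port>.service unit, sshd accepts the client's publickey, PAM and systemd-logind open session N (backed by session-N.scope), and on disconnect the session, scope, and per-connection service are all deactivated. The Go sketch below is only an analogy for the "one short-lived handler per accepted connection" shape of that pattern, with hypothetical names; it is not how systemd or OpenSSH implement it.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    // handleConn plays the role of one per-connection "sshd@..." instance: it
    // exists only for the lifetime of a single accepted connection and is torn
    // down when the client disconnects.
    func handleConn(id int, conn net.Conn) {
        defer conn.Close()
        fmt.Printf("session-%d opened for %s\n", id, conn.RemoteAddr()) // cf. "New session N of user core"
        // ... authenticate the key, run the interactive session ...
        fmt.Printf("session-%d closed\n", id) // cf. "Removed session N"
    }

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:2222") // stand-in for the sshd listener on :22
        if err != nil {
            log.Fatal(err)
        }
        for id := 10; ; id++ { // the transcript's sessions happen to start at 10
            conn, err := ln.Accept() // cf. systemd accepting a connection and spawning sshd@N-....service
            if err != nil {
                log.Fatal(err)
            }
            go handleConn(id, conn) // one handler per connection, like one service instance per connection
        }
    }
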