Sep 4 17:29:00.043449 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:29:00.043505 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:29:00.043526 kernel: BIOS-provided physical RAM map:
Sep 4 17:29:00.043538 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 17:29:00.043553 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 4 17:29:00.043567 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 4 17:29:00.043581 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 4 17:29:00.043599 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Sep 4 17:29:00.043612 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 4 17:29:00.043625 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 4 17:29:00.043639 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 4 17:29:00.043652 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 4 17:29:00.043663 kernel: printk: bootconsole [earlyser0] enabled
Sep 4 17:29:00.043673 kernel: NX (Execute Disable) protection: active
Sep 4 17:29:00.046511 kernel: APIC: Static calls initialized
Sep 4 17:29:00.046537 kernel: efi: EFI v2.7 by Microsoft
Sep 4 17:29:00.046551 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Sep 4 17:29:00.046564 kernel: SMBIOS 3.1.0 present.
Sep 4 17:29:00.046576 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Sep 4 17:29:00.046589 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 4 17:29:00.046602 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 4 17:29:00.046614 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Sep 4 17:29:00.046626 kernel: Hyper-V: Nested features: 0x1e0101
Sep 4 17:29:00.046638 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 4 17:29:00.046654 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 4 17:29:00.046667 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 17:29:00.046680 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 17:29:00.046693 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 4 17:29:00.046707 kernel: tsc: Detected 2593.907 MHz processor
Sep 4 17:29:00.046719 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:29:00.046732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:29:00.046745 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 4 17:29:00.046758 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 17:29:00.046772 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:29:00.046783 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 4 17:29:00.046793 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 4 17:29:00.046804 kernel: Using GB pages for direct mapping
Sep 4 17:29:00.046815 kernel: Secure boot disabled
Sep 4 17:29:00.046826 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:29:00.046835 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 4 17:29:00.049513 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049530 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049543 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Sep 4 17:29:00.049554 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 4 17:29:00.049567 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049579 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049590 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049605 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049615 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049627 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049639 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049652 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 4 17:29:00.049665 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Sep 4 17:29:00.049678 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 4 17:29:00.049690 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 4 17:29:00.049705 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 4 17:29:00.049718 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 4 17:29:00.049731 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 4 17:29:00.049744 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 4 17:29:00.049755 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 4 17:29:00.049768 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Sep 4 17:29:00.049790 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 17:29:00.049800 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 17:29:00.049812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 4 17:29:00.049826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 4 17:29:00.049835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 4 17:29:00.049844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 4 17:29:00.049853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 4 17:29:00.049862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 4 17:29:00.049870 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 4 17:29:00.049880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 4 17:29:00.049888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 4 17:29:00.049898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 4 17:29:00.049908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 4 17:29:00.049919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 4 17:29:00.049926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 4 17:29:00.049937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 4 17:29:00.049945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 4 17:29:00.049957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 4 17:29:00.049965 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 4 17:29:00.049975 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 4 17:29:00.049983 kernel: Zone ranges:
Sep 4 17:29:00.049996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:29:00.050004 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 17:29:00.050015 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 17:29:00.050023 kernel: Movable zone start for each node
Sep 4 17:29:00.050032 kernel: Early memory node ranges
Sep 4 17:29:00.050041 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 17:29:00.050050 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 4 17:29:00.050059 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 4 17:29:00.050067 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 17:29:00.050079 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 4 17:29:00.050087 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:29:00.050097 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 17:29:00.050104 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Sep 4 17:29:00.050115 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 4 17:29:00.050122 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 4 17:29:00.050133 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:29:00.050141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:29:00.050151 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:29:00.050163 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 4 17:29:00.050171 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:29:00.050180 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 4 17:29:00.050189 kernel: Booting paravirtualized kernel on Hyper-V
Sep 4 17:29:00.050197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:29:00.050207 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:29:00.050215 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:29:00.050225 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:29:00.050233 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:29:00.050245 kernel: Hyper-V: PV spinlocks enabled
Sep 4 17:29:00.050253 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:29:00.050265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:29:00.050273 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:29:00.050283 kernel: random: crng init done
Sep 4 17:29:00.050291 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 17:29:00.050301 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:29:00.050309 kernel: Fallback order for Node 0: 0
Sep 4 17:29:00.050323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Sep 4 17:29:00.050340 kernel: Policy zone: Normal
Sep 4 17:29:00.050349 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:29:00.050362 kernel: software IO TLB: area num 2.
Sep 4 17:29:00.050371 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 316268K reserved, 0K cma-reserved)
Sep 4 17:29:00.050382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:29:00.050390 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:29:00.050401 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:29:00.050409 kernel: Dynamic Preempt: voluntary
Sep 4 17:29:00.050420 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:29:00.050429 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:29:00.050440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:29:00.050451 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:29:00.050460 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:29:00.050469 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:29:00.050478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:29:00.050515 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:29:00.050525 kernel: Using NULL legacy PIC
Sep 4 17:29:00.050537 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 4 17:29:00.050546 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:29:00.050554 kernel: Console: colour dummy device 80x25
Sep 4 17:29:00.050561 kernel: printk: console [tty1] enabled
Sep 4 17:29:00.050569 kernel: printk: console [ttyS0] enabled
Sep 4 17:29:00.050577 kernel: printk: bootconsole [earlyser0] disabled
Sep 4 17:29:00.050585 kernel: ACPI: Core revision 20230628
Sep 4 17:29:00.050593 kernel: Failed to register legacy timer interrupt
Sep 4 17:29:00.050604 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:29:00.050611 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 17:29:00.050619 kernel: Hyper-V: Using IPI hypercalls
Sep 4 17:29:00.050627 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 4 17:29:00.050635 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 4 17:29:00.050643 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 4 17:29:00.050655 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 4 17:29:00.050663 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 4 17:29:00.050674 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 4 17:29:00.050685 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Sep 4 17:29:00.050696 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 17:29:00.050704 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep 4 17:29:00.050715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:29:00.050722 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:29:00.050733 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:29:00.050741 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:29:00.050752 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 17:29:00.050760 kernel: RETBleed: Vulnerable
Sep 4 17:29:00.050772 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:29:00.050780 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:29:00.050792 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:29:00.050800 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 17:29:00.050811 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:29:00.050819 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:29:00.050830 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:29:00.050838 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 17:29:00.050849 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 17:29:00.050858 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 17:29:00.050869 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:29:00.050879 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 4 17:29:00.050893 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 4 17:29:00.050901 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 4 17:29:00.050912 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 4 17:29:00.050923 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:29:00.050931 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:29:00.050941 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:29:00.050950 kernel: SELinux: Initializing.
Sep 4 17:29:00.050960 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.050969 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.050979 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 17:29:00.050988 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:29:00.051001 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:29:00.051009 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:29:00.051022 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 17:29:00.051030 kernel: signal: max sigframe size: 3632
Sep 4 17:29:00.051041 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:29:00.051049 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:29:00.051061 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 17:29:00.051069 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:29:00.051080 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:29:00.051090 kernel: .... node #0, CPUs: #1
Sep 4 17:29:00.051101 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 4 17:29:00.051110 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 17:29:00.051121 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:29:00.051129 kernel: smpboot: Max logical packages: 1
Sep 4 17:29:00.051140 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Sep 4 17:29:00.051148 kernel: devtmpfs: initialized
Sep 4 17:29:00.051159 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:29:00.051170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 4 17:29:00.051181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:29:00.051190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:29:00.051197 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:29:00.051208 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:29:00.051217 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:29:00.051226 kernel: audit: type=2000 audit(1725470938.027:1): state=initialized audit_enabled=0 res=1
Sep 4 17:29:00.051236 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:29:00.051247 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:29:00.051262 kernel: cpuidle: using governor menu
Sep 4 17:29:00.051276 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:29:00.051297 kernel: dca service started, version 1.12.1
Sep 4 17:29:00.051315 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 4 17:29:00.051331 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:29:00.051349 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:29:00.051366 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:29:00.051381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:29:00.051396 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:29:00.051417 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:29:00.051435 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:29:00.051456 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:29:00.051473 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:29:00.051502 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:29:00.051521 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:29:00.051536 kernel: ACPI: Interpreter enabled
Sep 4 17:29:00.051551 kernel: ACPI: PM: (supports S0 S5)
Sep 4 17:29:00.051568 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:29:00.051592 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:29:00.051608 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 17:29:00.051623 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 4 17:29:00.051637 kernel: iommu: Default domain type: Translated
Sep 4 17:29:00.051656 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:29:00.051671 kernel: efivars: Registered efivars operations
Sep 4 17:29:00.051687 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:29:00.051704 kernel: PCI: System does not support PCI
Sep 4 17:29:00.051722 kernel: vgaarb: loaded
Sep 4 17:29:00.051746 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 4 17:29:00.051763 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:29:00.051781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:29:00.051798 kernel: pnp: PnP ACPI init
Sep 4 17:29:00.051814 kernel: pnp: PnP ACPI: found 3 devices
Sep 4 17:29:00.051833 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:29:00.051852 kernel: NET: Registered PF_INET protocol family
Sep 4 17:29:00.051870 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:29:00.051887 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 17:29:00.051908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:29:00.051923 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:29:00.051938 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 17:29:00.051954 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 17:29:00.051972 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.051989 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.052002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:29:00.055517 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:29:00.055536 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:29:00.055556 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 17:29:00.055571 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Sep 4 17:29:00.055585 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 17:29:00.055599 kernel: Initialise system trusted keyrings
Sep 4 17:29:00.055612 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 17:29:00.055626 kernel: Key type asymmetric registered
Sep 4 17:29:00.055639 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:29:00.055653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:29:00.055666 kernel: io scheduler mq-deadline registered
Sep 4 17:29:00.055684 kernel: io scheduler kyber registered
Sep 4 17:29:00.055697 kernel: io scheduler bfq registered
Sep 4 17:29:00.055711 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:29:00.055725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:29:00.055738 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:29:00.055752 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 4 17:29:00.055765 kernel: i8042: PNP: No PS/2 controller found.
Sep 4 17:29:00.055941 kernel: rtc_cmos 00:02: registered as rtc0
Sep 4 17:29:00.056212 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:28:59 UTC (1725470939)
Sep 4 17:29:00.056348 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 4 17:29:00.056367 kernel: intel_pstate: CPU model not supported
Sep 4 17:29:00.056382 kernel: efifb: probing for efifb
Sep 4 17:29:00.056396 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 17:29:00.056409 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 17:29:00.056423 kernel: efifb: scrolling: redraw
Sep 4 17:29:00.056436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 17:29:00.056455 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:29:00.056469 kernel: fb0: EFI VGA frame buffer device
Sep 4 17:29:00.056488 kernel: pstore: Using crash dump compression: deflate
Sep 4 17:29:00.056511 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 17:29:00.057257 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:29:00.057275 kernel: Segment Routing with IPv6
Sep 4 17:29:00.057289 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:29:00.057304 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:29:00.057317 kernel: Key type dns_resolver registered
Sep 4 17:29:00.057331 kernel: IPI shorthand broadcast: enabled
Sep 4 17:29:00.057350 kernel: sched_clock: Marking stable (760002900, 38431400)->(964922300, -166488000)
Sep 4 17:29:00.057363 kernel: registered taskstats version 1
Sep 4 17:29:00.057377 kernel: Loading compiled-in X.509 certificates
Sep 4 17:29:00.057391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:29:00.057404 kernel: Key type .fscrypt registered
Sep 4 17:29:00.057418 kernel: Key type fscrypt-provisioning registered
Sep 4 17:29:00.057431 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:29:00.057445 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:29:00.057461 kernel: ima: No architecture policies found
Sep 4 17:29:00.057474 kernel: clk: Disabling unused clocks
Sep 4 17:29:00.057501 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:29:00.057515 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:29:00.057528 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:29:00.057541 kernel: Run /init as init process
Sep 4 17:29:00.057554 kernel: with arguments:
Sep 4 17:29:00.057567 kernel: /init
Sep 4 17:29:00.057581 kernel: with environment:
Sep 4 17:29:00.057595 kernel: HOME=/
Sep 4 17:29:00.057608 kernel: TERM=linux
Sep 4 17:29:00.057622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:29:00.057637 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:29:00.057653 systemd[1]: Detected virtualization microsoft.
Sep 4 17:29:00.057668 systemd[1]: Detected architecture x86-64.
Sep 4 17:29:00.057682 systemd[1]: Running in initrd.
Sep 4 17:29:00.057695 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:29:00.057711 systemd[1]: Hostname set to .
Sep 4 17:29:00.057726 systemd[1]: Initializing machine ID from random generator.
Sep 4 17:29:00.057739 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:29:00.057755 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:29:00.057771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:29:00.057788 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:29:00.057801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:29:00.057816 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:29:00.057835 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:29:00.057854 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:29:00.057868 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:29:00.057882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:29:00.057895 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:29:00.057909 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:29:00.057922 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:29:00.057938 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:29:00.057955 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:29:00.057970 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:29:00.057984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:29:00.057997 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:29:00.058012 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:29:00.058027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:29:00.058041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:29:00.058059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:29:00.058071 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:29:00.058085 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:29:00.058098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:29:00.058112 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:29:00.058126 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:29:00.058141 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:29:00.058155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:29:00.058169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:29:00.058188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:29:00.058229 systemd-journald[176]: Collecting audit messages is disabled.
Sep 4 17:29:00.058264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:29:00.058280 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:29:00.058302 systemd-journald[176]: Journal started
Sep 4 17:29:00.058350 systemd-journald[176]: Runtime Journal (/run/log/journal/7748a771a5cb4db595b20ab08accff8e) is 8.0M, max 158.8M, 150.8M free.
Sep 4 17:29:00.052361 systemd-modules-load[177]: Inserted module 'overlay'
Sep 4 17:29:00.071935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:29:00.078509 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:29:00.083313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:29:00.088589 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:29:00.110798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:29:00.119515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:29:00.119551 kernel: Bridge firewalling registered
Sep 4 17:29:00.118731 systemd-modules-load[177]: Inserted module 'br_netfilter'
Sep 4 17:29:00.121726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:29:00.134666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:29:00.137111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:29:00.145526 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:29:00.151152 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:29:00.162033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:29:00.176812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:29:00.182802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:29:00.189726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:29:00.201382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:29:00.211733 dracut-cmdline[209]: dracut-dracut-053
Sep 4 17:29:00.216503 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:29:00.256018 systemd-resolved[213]: Positive Trust Anchors:
Sep 4 17:29:00.256040 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:29:00.256083 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:29:00.278443 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 4 17:29:00.281768 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:29:00.284469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:29:00.312512 kernel: SCSI subsystem initialized
Sep 4 17:29:00.323507 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:29:00.336512 kernel: iscsi: registered transport (tcp)
Sep 4 17:29:00.361527 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:29:00.361610 kernel: QLogic iSCSI HBA Driver
Sep 4 17:29:00.397512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:29:00.411660 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:29:00.441638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:29:00.441718 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:29:00.444721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:29:00.488511 kernel: raid6: avx512x4 gen() 18524 MB/s
Sep 4 17:29:00.507508 kernel: raid6: avx512x2 gen() 18470 MB/s
Sep 4 17:29:00.526501 kernel: raid6: avx512x1 gen() 18389 MB/s
Sep 4 17:29:00.545500 kernel: raid6: avx2x4 gen() 18360 MB/s
Sep 4 17:29:00.564503 kernel: raid6: avx2x2 gen() 18370 MB/s
Sep 4 17:29:00.584095 kernel: raid6: avx2x1 gen() 13899 MB/s
Sep 4 17:29:00.584129 kernel: raid6: using algorithm avx512x4 gen() 18524 MB/s
Sep 4 17:29:00.605864 kernel: raid6: .... xor() 6883 MB/s, rmw enabled
Sep 4 17:29:00.605900 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 17:29:00.631516 kernel: xor: automatically using best checksumming function avx
Sep 4 17:29:00.798520 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:29:00.808388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:29:00.816679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:29:00.834662 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Sep 4 17:29:00.840688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:29:00.850184 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:29:00.866990 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Sep 4 17:29:00.894307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:29:00.902770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:29:00.942151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:29:00.960889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:29:00.977486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:29:00.988189 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:29:00.996820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:29:01.003093 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:29:01.013674 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:29:01.024541 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:29:01.046916 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:29:01.060631 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:29:01.060695 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:29:01.069988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:29:01.070152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:29:01.076232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:29:01.077115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:29:01.077252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:29:01.078274 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:29:01.102035 kernel: hv_vmbus: Vmbus version:5.2
Sep 4 17:29:01.101158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:29:01.123213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:29:01.123332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:29:01.144520 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 4 17:29:01.144578 kernel: hv_vmbus: registering driver hv_storvsc
Sep 4 17:29:01.145261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:29:01.150909 kernel: scsi host0: storvsc_host_t
Sep 4 17:29:01.151124 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 4 17:29:01.156528 kernel: scsi host1: storvsc_host_t
Sep 4 17:29:01.160692 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 4 17:29:01.160801 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 17:29:01.164881 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 17:29:01.171453 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 4 17:29:01.196885 kernel: hv_vmbus: registering driver hv_netvsc
Sep 4 17:29:01.196945 kernel: PTP clock support registered
Sep 4 17:29:01.197860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:29:01.214649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:29:01.232814 kernel: hv_utils: Registering HyperV Utility Driver
Sep 4 17:29:01.232892 kernel: hv_vmbus: registering driver hv_utils
Sep 4 17:29:01.238523 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 4 17:29:01.238797 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:29:01.238815 kernel: hv_utils: Heartbeat IC version 3.0
Sep 4 17:29:01.242065 kernel: hv_utils: Shutdown IC version 3.2
Sep 4 17:29:01.243897 kernel: hv_utils: TimeSync IC version 4.0
Sep 4 17:29:01.895252 systemd-resolved[213]: Clock change detected. Flushing caches.
Sep 4 17:29:01.905171 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:29:01.914177 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 4 17:29:01.915715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:29:01.923175 kernel: hv_vmbus: registering driver hid_hyperv
Sep 4 17:29:01.930008 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 4 17:29:01.930053 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 4 17:29:01.945412 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 4 17:29:01.945666 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 4 17:29:01.947924 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 17:29:01.948152 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 4 17:29:01.953200 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 4 17:29:01.957200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:29:01.960186 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 17:29:02.055121 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: VF slot 1 added
Sep 4 17:29:02.064079 kernel: hv_vmbus: registering driver hv_pci
Sep 4 17:29:02.064127 kernel: hv_pci 805f6e05-8239-44a1-8b72-ae0c1cd31623: PCI VMBus probing: Using version 0x10004
Sep 4 17:29:02.071174 kernel: hv_pci 805f6e05-8239-44a1-8b72-ae0c1cd31623: PCI host bridge to bus 8239:00
Sep 4 17:29:02.071335 kernel: pci_bus 8239:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 4 17:29:02.075084 kernel: pci_bus 8239:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 4 17:29:02.079324 kernel: pci 8239:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 4 17:29:02.083200 kernel: pci 8239:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 4 17:29:02.086428 kernel: pci 8239:00:02.0: enabling Extended Tags
Sep 4 17:29:02.096171 kernel: pci 8239:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8239:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 4 17:29:02.101254 kernel: pci_bus 8239:00: busn_res: [bus 00-ff] end is updated to 00
Sep 4 17:29:02.101547 kernel: pci 8239:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 4 17:29:02.286218 kernel: mlx5_core 8239:00:02.0: enabling device (0000 -> 0002)
Sep 4 17:29:02.290193 kernel: mlx5_core 8239:00:02.0: firmware version: 14.30.1284
Sep 4 17:29:02.515346 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: VF registering: eth1
Sep 4 17:29:02.515718 kernel: mlx5_core 8239:00:02.0 eth1: joined to eth0
Sep 4 17:29:02.520181 kernel: mlx5_core 8239:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 4 17:29:02.528181 kernel: mlx5_core 8239:00:02.0 enP33337s1: renamed from eth1
Sep 4 17:29:02.802893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 4 17:29:02.882189 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (465)
Sep 4 17:29:02.904069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 17:29:02.937135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 4 17:29:02.992183 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (447)
Sep 4 17:29:03.006370 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 4 17:29:03.011659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 4 17:29:03.028315 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:29:03.040177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:29:03.047181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:29:04.056467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:29:04.056537 disk-uuid[603]: The operation has completed successfully.
Sep 4 17:29:04.154095 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:29:04.154234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:29:04.171337 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:29:04.176618 sh[689]: Success
Sep 4 17:29:04.212180 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 17:29:04.422453 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:29:04.435228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:29:04.439741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:29:04.457490 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:29:04.457552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:29:04.460653 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:29:04.463103 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:29:04.465416 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:29:04.913059 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:29:04.915023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:29:04.923412 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:29:04.928533 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:29:04.950125 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:29:04.950203 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:29:04.952823 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:29:04.996222 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:29:05.012216 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:29:05.011767 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:29:05.015794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:29:05.022432 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:29:05.031388 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:29:05.041299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:29:05.059239 systemd-networkd[870]: lo: Link UP
Sep 4 17:29:05.059250 systemd-networkd[870]: lo: Gained carrier
Sep 4 17:29:05.064120 systemd-networkd[870]: Enumeration completed
Sep 4 17:29:05.064228 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:29:05.066728 systemd[1]: Reached target network.target - Network.
Sep 4 17:29:05.069548 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:29:05.069552 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:29:05.130191 kernel: mlx5_core 8239:00:02.0 enP33337s1: Link up
Sep 4 17:29:05.168191 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: Data path switched to VF: enP33337s1
Sep 4 17:29:05.169235 systemd-networkd[870]: enP33337s1: Link UP
Sep 4 17:29:05.169411 systemd-networkd[870]: eth0: Link UP
Sep 4 17:29:05.169635 systemd-networkd[870]: eth0: Gained carrier
Sep 4 17:29:05.169658 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:29:05.180392 systemd-networkd[870]: enP33337s1: Gained carrier
Sep 4 17:29:05.204216 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 4 17:29:06.125043 ignition[873]: Ignition 2.18.0
Sep 4 17:29:06.125056 ignition[873]: Stage: fetch-offline
Sep 4 17:29:06.125120 ignition[873]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:06.125134 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:06.125340 ignition[873]: parsed url from cmdline: ""
Sep 4 17:29:06.125346 ignition[873]: no config URL provided
Sep 4 17:29:06.125354 ignition[873]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:29:06.125365 ignition[873]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:29:06.125372 ignition[873]: failed to fetch config: resource requires networking
Sep 4 17:29:06.128419 ignition[873]: Ignition finished successfully
Sep 4 17:29:06.143328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:29:06.152328 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:29:06.166864 ignition[882]: Ignition 2.18.0
Sep 4 17:29:06.166874 ignition[882]: Stage: fetch
Sep 4 17:29:06.167096 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:06.167108 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:06.167227 ignition[882]: parsed url from cmdline: ""
Sep 4 17:29:06.167232 ignition[882]: no config URL provided
Sep 4 17:29:06.167237 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:29:06.167247 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:29:06.167269 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 17:29:06.249287 ignition[882]: GET result: OK
Sep 4 17:29:06.249465 ignition[882]: config has been read from IMDS userdata
Sep 4 17:29:06.249509 ignition[882]: parsing config with SHA512: 9191f7757c2c772b4cbd699f6ec5e79f4b4c6898404e0a42eca9284f632d7085df0e58db39e5dd1f8b348bc16bbeb297af57507038303ea6b407d2cb7df12e40
Sep 4 17:29:06.255419 unknown[882]: fetched base config from "system"
Sep 4 17:29:06.256261 ignition[882]: fetch: fetch complete
Sep 4 17:29:06.255445 unknown[882]: fetched base config from "system"
Sep 4 17:29:06.256268 ignition[882]: fetch: fetch passed
Sep 4 17:29:06.255461 unknown[882]: fetched user config from "azure"
Sep 4 17:29:06.256329 ignition[882]: Ignition finished successfully
Sep 4 17:29:06.258053 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:29:06.276350 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:29:06.293318 ignition[890]: Ignition 2.18.0
Sep 4 17:29:06.293329 ignition[890]: Stage: kargs
Sep 4 17:29:06.293565 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:06.296633 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:29:06.293578 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:06.294501 ignition[890]: kargs: kargs passed
Sep 4 17:29:06.294550 ignition[890]: Ignition finished successfully
Sep 4 17:29:06.310469 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:29:06.323854 ignition[897]: Ignition 2.18.0
Sep 4 17:29:06.323864 ignition[897]: Stage: disks
Sep 4 17:29:06.324104 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:06.326030 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:29:06.324118 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:06.329451 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:29:06.325039 ignition[897]: disks: disks passed
Sep 4 17:29:06.333341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:29:06.325086 ignition[897]: Ignition finished successfully
Sep 4 17:29:06.347503 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:29:06.349769 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:29:06.354024 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:29:06.366339 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:29:06.422316 systemd-networkd[870]: eth0: Gained IPv6LL
Sep 4 17:29:06.471310 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 4 17:29:06.476881 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:29:06.489377 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:29:06.593192 kernel: EXT4-fs (sda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:29:06.593710 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:29:06.596012 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:29:06.658247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:29:06.662499 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:29:06.678055 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 17:29:06.680855 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (917)
Sep 4 17:29:06.683792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:29:06.697553 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:29:06.697594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:29:06.697615 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:29:06.683848 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:29:06.700540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:29:06.706000 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:29:06.708409 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:29:06.713666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:29:06.870401 systemd-networkd[870]: enP33337s1: Gained IPv6LL
Sep 4 17:29:07.432857 coreos-metadata[919]: Sep 04 17:29:07.432 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 17:29:07.437324 coreos-metadata[919]: Sep 04 17:29:07.435 INFO Fetch successful
Sep 4 17:29:07.437324 coreos-metadata[919]: Sep 04 17:29:07.435 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 4 17:29:07.448211 coreos-metadata[919]: Sep 04 17:29:07.448 INFO Fetch successful
Sep 4 17:29:07.467906 coreos-metadata[919]: Sep 04 17:29:07.467 INFO wrote hostname ci-3975.2.1-a-27f7f2cbdf to /sysroot/etc/hostname
Sep 4 17:29:07.470221 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 17:29:07.665766 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:29:07.720706 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:29:07.727490 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:29:07.732236 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:29:09.045697 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:29:09.056274 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:29:09.061539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:29:09.073545 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:29:09.078724 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:29:09.099815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:29:09.107601 ignition[1040]: INFO : Ignition 2.18.0
Sep 4 17:29:09.107601 ignition[1040]: INFO : Stage: mount
Sep 4 17:29:09.113539 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:09.113539 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:09.113539 ignition[1040]: INFO : mount: mount passed
Sep 4 17:29:09.113539 ignition[1040]: INFO : Ignition finished successfully
Sep 4 17:29:09.109606 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:29:09.122224 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:29:09.129314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:29:09.143176 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1052)
Sep 4 17:29:09.147174 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:29:09.147208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:29:09.151438 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:29:09.157180 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:29:09.158106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:29:09.179550 ignition[1068]: INFO : Ignition 2.18.0
Sep 4 17:29:09.179550 ignition[1068]: INFO : Stage: files
Sep 4 17:29:09.183321 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:09.183321 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:09.183321 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:29:09.183321 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:29:09.183321 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:29:09.283704 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:29:09.287956 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:29:09.291641 unknown[1068]: wrote ssh authorized keys file for user: core
Sep 4 17:29:09.293844 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:29:09.338145 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:29:09.343630 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:29:09.684183 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:29:09.804238 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Sep 4 17:29:10.319006 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:29:10.645898 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:29:10.645898 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:29:10.676035 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:29:10.681700 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:29:10.681700 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:29:10.688750 ignition[1068]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:29:10.688750 ignition[1068]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:29:10.695017 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:29:10.698870 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:29:10.704724 ignition[1068]: INFO : files: files passed
Sep 4 17:29:10.704724 ignition[1068]: INFO : Ignition finished successfully
Sep 4 17:29:10.700971 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:29:10.713987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:29:10.719313 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:29:10.722032 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:29:10.722141 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:29:10.736819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:29:10.745502 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:29:10.745502 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:29:10.740894 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:29:10.754631 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:29:10.758329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:29:10.783806 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:29:10.783929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:29:10.789122 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:29:10.793917 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:29:10.798048 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:29:10.806368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:29:10.820587 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:29:10.829306 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:29:10.839323 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:29:10.844438 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:29:10.849316 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:29:10.853268 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:29:10.853408 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:29:10.860246 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:29:10.862445 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:29:10.866282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:29:10.868718 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:29:10.877320 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:29:10.882102 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:29:10.884351 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:29:10.891964 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:29:10.894269 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:29:10.898547 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:29:10.903940 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:29:10.904119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:29:10.910412 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:29:10.915169 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:29:10.917666 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:29:10.919969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:29:10.922603 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:29:10.924733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:29:10.934061 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:29:10.934221 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:29:10.941791 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:29:10.941919 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:29:10.947903 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 17:29:10.948035 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 17:29:10.962364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:29:10.967412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:29:10.973479 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:29:10.973649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:29:10.974505 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:29:10.974603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:29:10.982706 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:29:10.982799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:29:10.997891 ignition[1122]: INFO : Ignition 2.18.0
Sep 4 17:29:10.997891 ignition[1122]: INFO : Stage: umount
Sep 4 17:29:10.997891 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:29:10.997891 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:29:10.997891 ignition[1122]: INFO : umount: umount passed
Sep 4 17:29:10.997891 ignition[1122]: INFO : Ignition finished successfully
Sep 4 17:29:11.001098 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:29:11.001254 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:29:11.004232 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:29:11.004332 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:29:11.011283 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:29:11.011332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:29:11.015121 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:29:11.015183 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:29:11.018968 systemd[1]: Stopped target network.target - Network.
Sep 4 17:29:11.019724 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:29:11.019777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:29:11.020096 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:29:11.020424 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:29:11.027201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:29:11.058450 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:29:11.060473 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:29:11.066642 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:29:11.066706 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:29:11.068925 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:29:11.068973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:29:11.072843 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:29:11.074701 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:29:11.084006 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:29:11.084066 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:29:11.086449 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:29:11.092801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:29:11.097203 systemd-networkd[870]: eth0: DHCPv6 lease lost
Sep 4 17:29:11.098951 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:29:11.100096 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:29:11.100208 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:29:11.106239 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:29:11.106309 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:29:11.118519 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:29:11.120596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:29:11.122782 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:29:11.125785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:29:11.131874 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:29:11.131969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:29:11.145561 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:29:11.145672 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:29:11.151622 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:29:11.152697 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:29:11.158736 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:29:11.158790 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:29:11.163853 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:29:11.163989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:29:11.170204 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:29:11.170291 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:29:11.174571 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:29:11.174628 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:29:11.180029 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:29:11.180077 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:29:11.186801 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:29:11.186845 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:29:11.206643 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: Data path switched from VF: enP33337s1
Sep 4 17:29:11.191392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:29:11.191434 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:29:11.214352 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:29:11.219171 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:29:11.219369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:29:11.227413 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:29:11.227482 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:29:11.230235 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:29:11.230297 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:29:11.233452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:29:11.233498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:29:11.250888 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:29:11.251019 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:29:11.255321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:29:11.255404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:29:11.616860 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:29:11.617021 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:29:11.622086 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:29:11.625929 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:29:11.625995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:29:11.640381 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:29:12.089379 systemd[1]: Switching root.
Sep 4 17:29:12.115025 systemd-journald[176]: Journal stopped
Sep 4 17:29:00.043449 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:29:00.043505 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:29:00.043526 kernel: BIOS-provided physical RAM map:
Sep 4 17:29:00.043538 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 17:29:00.043553 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 4 17:29:00.043567 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 4 17:29:00.043581 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 4 17:29:00.043599 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Sep 4 17:29:00.043612 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 4 17:29:00.043625 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 4 17:29:00.043639 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 4 17:29:00.043652 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 4 17:29:00.043663 kernel: printk: bootconsole [earlyser0] enabled
Sep 4 17:29:00.043673 kernel: NX (Execute Disable) protection: active
Sep 4 17:29:00.046511 kernel: APIC: Static calls initialized
Sep 4 17:29:00.046537 kernel: efi: EFI v2.7 by Microsoft
Sep 4 17:29:00.046551 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Sep 4 17:29:00.046564 kernel: SMBIOS 3.1.0 present.
Sep 4 17:29:00.046576 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Sep 4 17:29:00.046589 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 4 17:29:00.046602 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 4 17:29:00.046614 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Sep 4 17:29:00.046626 kernel: Hyper-V: Nested features: 0x1e0101
Sep 4 17:29:00.046638 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 4 17:29:00.046654 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 4 17:29:00.046667 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 17:29:00.046680 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 17:29:00.046693 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 4 17:29:00.046707 kernel: tsc: Detected 2593.907 MHz processor
Sep 4 17:29:00.046719 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:29:00.046732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:29:00.046745 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 4 17:29:00.046758 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 17:29:00.046772 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:29:00.046783 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 4 17:29:00.046793 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 4 17:29:00.046804 kernel: Using GB pages for direct mapping
Sep 4 17:29:00.046815 kernel: Secure boot disabled
Sep 4 17:29:00.046826 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:29:00.046835 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 4 17:29:00.049513 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049530 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049543 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Sep 4 17:29:00.049554 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 4 17:29:00.049567 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049579 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049590 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049605 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049615 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049627 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049639 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:29:00.049652 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 4 17:29:00.049665 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Sep 4 17:29:00.049678 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 4 17:29:00.049690 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 4 17:29:00.049705 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 4 17:29:00.049718 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 4 17:29:00.049731 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 4 17:29:00.049744 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 4 17:29:00.049755 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 4 17:29:00.049768 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Sep 4 17:29:00.049790 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 17:29:00.049800 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 17:29:00.049812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 4 17:29:00.049826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 4 17:29:00.049835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 4 17:29:00.049844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 4 17:29:00.049853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 4 17:29:00.049862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 4 17:29:00.049870 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 4 17:29:00.049880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 4 17:29:00.049888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 4 17:29:00.049898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 4 17:29:00.049908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 4 17:29:00.049919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 4 17:29:00.049926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 4 17:29:00.049937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 4 17:29:00.049945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 4 17:29:00.049957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 4 17:29:00.049965 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 4 17:29:00.049975 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 4 17:29:00.049983 kernel: Zone ranges:
Sep 4 17:29:00.049996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:29:00.050004 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 17:29:00.050015 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 17:29:00.050023 kernel: Movable zone start for each node
Sep 4 17:29:00.050032 kernel: Early memory node ranges
Sep 4 17:29:00.050041 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 17:29:00.050050 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 4 17:29:00.050059 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 4 17:29:00.050067 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 17:29:00.050079 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 4 17:29:00.050087 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:29:00.050097 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 17:29:00.050104 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Sep 4 17:29:00.050115 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 4 17:29:00.050122 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 4 17:29:00.050133 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:29:00.050141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:29:00.050151 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:29:00.050163 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 4 17:29:00.050171 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:29:00.050180 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 4 17:29:00.050189 kernel: Booting paravirtualized kernel on Hyper-V
Sep 4 17:29:00.050197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:29:00.050207 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:29:00.050215 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:29:00.050225 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:29:00.050233 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:29:00.050245 kernel: Hyper-V: PV spinlocks enabled
Sep 4 17:29:00.050253 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:29:00.050265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:29:00.050273 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:29:00.050283 kernel: random: crng init done
Sep 4 17:29:00.050291 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 17:29:00.050301 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:29:00.050309 kernel: Fallback order for Node 0: 0
Sep 4 17:29:00.050323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Sep 4 17:29:00.050340 kernel: Policy zone: Normal
Sep 4 17:29:00.050349 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:29:00.050362 kernel: software IO TLB: area num 2.
Sep 4 17:29:00.050371 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 316268K reserved, 0K cma-reserved)
Sep 4 17:29:00.050382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:29:00.050390 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:29:00.050401 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:29:00.050409 kernel: Dynamic Preempt: voluntary
Sep 4 17:29:00.050420 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:29:00.050429 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:29:00.050440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:29:00.050451 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:29:00.050460 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:29:00.050469 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:29:00.050478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:29:00.050515 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:29:00.050525 kernel: Using NULL legacy PIC
Sep 4 17:29:00.050537 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 4 17:29:00.050546 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:29:00.050554 kernel: Console: colour dummy device 80x25
Sep 4 17:29:00.050561 kernel: printk: console [tty1] enabled
Sep 4 17:29:00.050569 kernel: printk: console [ttyS0] enabled
Sep 4 17:29:00.050577 kernel: printk: bootconsole [earlyser0] disabled
Sep 4 17:29:00.050585 kernel: ACPI: Core revision 20230628
Sep 4 17:29:00.050593 kernel: Failed to register legacy timer interrupt
Sep 4 17:29:00.050604 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:29:00.050611 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 17:29:00.050619 kernel: Hyper-V: Using IPI hypercalls
Sep 4 17:29:00.050627 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 4 17:29:00.050635 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 4 17:29:00.050643 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 4 17:29:00.050655 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 4 17:29:00.050663 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 4 17:29:00.050674 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 4 17:29:00.050685 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Sep 4 17:29:00.050696 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 17:29:00.050704 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep 4 17:29:00.050715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:29:00.050722 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:29:00.050733 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:29:00.050741 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:29:00.050752 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 17:29:00.050760 kernel: RETBleed: Vulnerable
Sep 4 17:29:00.050772 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:29:00.050780 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:29:00.050792 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:29:00.050800 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 17:29:00.050811 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:29:00.050819 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:29:00.050830 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:29:00.050838 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 17:29:00.050849 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 17:29:00.050858 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 17:29:00.050869 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:29:00.050879 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 4 17:29:00.050893 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 4 17:29:00.050901 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 4 17:29:00.050912 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 4 17:29:00.050923 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:29:00.050931 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:29:00.050941 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:29:00.050950 kernel: SELinux: Initializing.
Sep 4 17:29:00.050960 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.050969 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.050979 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 17:29:00.050988 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:29:00.051001 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:29:00.051009 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:29:00.051022 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 17:29:00.051030 kernel: signal: max sigframe size: 3632
Sep 4 17:29:00.051041 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:29:00.051049 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:29:00.051061 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 17:29:00.051069 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:29:00.051080 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:29:00.051090 kernel: .... node #0, CPUs: #1
Sep 4 17:29:00.051101 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 4 17:29:00.051110 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 17:29:00.051121 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:29:00.051129 kernel: smpboot: Max logical packages: 1
Sep 4 17:29:00.051140 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Sep 4 17:29:00.051148 kernel: devtmpfs: initialized
Sep 4 17:29:00.051159 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:29:00.051170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 4 17:29:00.051181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:29:00.051190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:29:00.051197 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:29:00.051208 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:29:00.051217 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:29:00.051226 kernel: audit: type=2000 audit(1725470938.027:1): state=initialized audit_enabled=0 res=1
Sep 4 17:29:00.051236 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:29:00.051247 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:29:00.051262 kernel: cpuidle: using governor menu
Sep 4 17:29:00.051276 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:29:00.051297 kernel: dca service started, version 1.12.1
Sep 4 17:29:00.051315 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 4 17:29:00.051331 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:29:00.051349 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:29:00.051366 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:29:00.051381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:29:00.051396 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:29:00.051417 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:29:00.051435 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:29:00.051456 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:29:00.051473 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:29:00.051502 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:29:00.051521 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:29:00.051536 kernel: ACPI: Interpreter enabled
Sep 4 17:29:00.051551 kernel: ACPI: PM: (supports S0 S5)
Sep 4 17:29:00.051568 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:29:00.051592 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:29:00.051608 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 17:29:00.051623 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 4 17:29:00.051637 kernel: iommu: Default domain type: Translated
Sep 4 17:29:00.051656 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:29:00.051671 kernel: efivars: Registered efivars operations
Sep 4 17:29:00.051687 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:29:00.051704 kernel: PCI: System does not support PCI
Sep 4 17:29:00.051722 kernel: vgaarb: loaded
Sep 4 17:29:00.051746 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 4 17:29:00.051763 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:29:00.051781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:29:00.051798 kernel: pnp: PnP ACPI init
Sep 4 17:29:00.051814 kernel: pnp: PnP ACPI: found 3 devices
Sep 4 17:29:00.051833 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:29:00.051852 kernel: NET: Registered PF_INET protocol family
Sep 4 17:29:00.051870 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:29:00.051887 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 17:29:00.051908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:29:00.051923 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:29:00.051938 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 17:29:00.051954 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 17:29:00.051972 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.051989 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 17:29:00.052002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:29:00.055517 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:29:00.055536 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:29:00.055556 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 17:29:00.055571 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Sep 4 17:29:00.055585 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 17:29:00.055599 kernel: Initialise system trusted keyrings
Sep 4 17:29:00.055612 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 17:29:00.055626 kernel: Key type asymmetric registered
Sep 4 17:29:00.055639 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:29:00.055653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:29:00.055666 kernel: io scheduler mq-deadline registered
Sep 4 17:29:00.055684 kernel: io scheduler kyber
registered Sep 4 17:29:00.055697 kernel: io scheduler bfq registered Sep 4 17:29:00.055711 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:29:00.055725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:29:00.055738 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:29:00.055752 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:29:00.055765 kernel: i8042: PNP: No PS/2 controller found. Sep 4 17:29:00.055941 kernel: rtc_cmos 00:02: registered as rtc0 Sep 4 17:29:00.056212 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:28:59 UTC (1725470939) Sep 4 17:29:00.056348 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 4 17:29:00.056367 kernel: intel_pstate: CPU model not supported Sep 4 17:29:00.056382 kernel: efifb: probing for efifb Sep 4 17:29:00.056396 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:29:00.056409 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:29:00.056423 kernel: efifb: scrolling: redraw Sep 4 17:29:00.056436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:29:00.056455 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:29:00.056469 kernel: fb0: EFI VGA frame buffer device Sep 4 17:29:00.056488 kernel: pstore: Using crash dump compression: deflate Sep 4 17:29:00.056511 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:29:00.057257 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:29:00.057275 kernel: Segment Routing with IPv6 Sep 4 17:29:00.057289 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:29:00.057304 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:29:00.057317 kernel: Key type dns_resolver registered Sep 4 17:29:00.057331 kernel: IPI shorthand broadcast: enabled Sep 4 17:29:00.057350 kernel: sched_clock: Marking stable (760002900, 38431400)->(964922300, -166488000) Sep 4 
17:29:00.057363 kernel: registered taskstats version 1 Sep 4 17:29:00.057377 kernel: Loading compiled-in X.509 certificates Sep 4 17:29:00.057391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:29:00.057404 kernel: Key type .fscrypt registered Sep 4 17:29:00.057418 kernel: Key type fscrypt-provisioning registered Sep 4 17:29:00.057431 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:29:00.057445 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:29:00.057461 kernel: ima: No architecture policies found Sep 4 17:29:00.057474 kernel: clk: Disabling unused clocks Sep 4 17:29:00.057501 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:29:00.057515 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:29:00.057528 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:29:00.057541 kernel: Run /init as init process Sep 4 17:29:00.057554 kernel: with arguments: Sep 4 17:29:00.057567 kernel: /init Sep 4 17:29:00.057581 kernel: with environment: Sep 4 17:29:00.057595 kernel: HOME=/ Sep 4 17:29:00.057608 kernel: TERM=linux Sep 4 17:29:00.057622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:29:00.057637 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:00.057653 systemd[1]: Detected virtualization microsoft. Sep 4 17:29:00.057668 systemd[1]: Detected architecture x86-64. Sep 4 17:29:00.057682 systemd[1]: Running in initrd. Sep 4 17:29:00.057695 systemd[1]: No hostname configured, using default hostname. Sep 4 17:29:00.057711 systemd[1]: Hostname set to . 
Sep 4 17:29:00.057726 systemd[1]: Initializing machine ID from random generator. Sep 4 17:29:00.057739 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:29:00.057755 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:00.057771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:00.057788 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:29:00.057801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:00.057816 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:29:00.057835 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:29:00.057854 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:29:00.057868 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:29:00.057882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:00.057895 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:00.057909 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:00.057922 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:00.057938 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:00.057955 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:00.057970 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:00.057984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:00.057997 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 4 17:29:00.058012 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:29:00.058027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:00.058041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:00.058059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:00.058071 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:00.058085 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:29:00.058098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:00.058112 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:29:00.058126 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:29:00.058141 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:00.058155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:00.058169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:00.058188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:00.058229 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:29:00.058264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:00.058280 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:29:00.058302 systemd-journald[176]: Journal started Sep 4 17:29:00.058350 systemd-journald[176]: Runtime Journal (/run/log/journal/7748a771a5cb4db595b20ab08accff8e) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:29:00.052361 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:29:00.071935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:29:00.078509 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 4 17:29:00.083313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:00.088589 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:00.110798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:00.119515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:29:00.119551 kernel: Bridge firewalling registered Sep 4 17:29:00.118731 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:29:00.121726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:00.134666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:00.137111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:00.145526 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:00.151152 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:00.162033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:00.176812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:29:00.182802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:00.189726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:00.201382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 4 17:29:00.211733 dracut-cmdline[209]: dracut-dracut-053 Sep 4 17:29:00.216503 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:00.256018 systemd-resolved[213]: Positive Trust Anchors: Sep 4 17:29:00.256040 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:00.256083 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:00.278443 systemd-resolved[213]: Defaulting to hostname 'linux'. Sep 4 17:29:00.281768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:00.284469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:00.312512 kernel: SCSI subsystem initialized Sep 4 17:29:00.323507 kernel: Loading iSCSI transport class v2.0-870. 
Sep 4 17:29:00.336512 kernel: iscsi: registered transport (tcp) Sep 4 17:29:00.361527 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:29:00.361610 kernel: QLogic iSCSI HBA Driver Sep 4 17:29:00.397512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:00.411660 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:29:00.441638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:29:00.441718 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:29:00.444721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:29:00.488511 kernel: raid6: avx512x4 gen() 18524 MB/s Sep 4 17:29:00.507508 kernel: raid6: avx512x2 gen() 18470 MB/s Sep 4 17:29:00.526501 kernel: raid6: avx512x1 gen() 18389 MB/s Sep 4 17:29:00.545500 kernel: raid6: avx2x4 gen() 18360 MB/s Sep 4 17:29:00.564503 kernel: raid6: avx2x2 gen() 18370 MB/s Sep 4 17:29:00.584095 kernel: raid6: avx2x1 gen() 13899 MB/s Sep 4 17:29:00.584129 kernel: raid6: using algorithm avx512x4 gen() 18524 MB/s Sep 4 17:29:00.605864 kernel: raid6: .... xor() 6883 MB/s, rmw enabled Sep 4 17:29:00.605900 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:29:00.631516 kernel: xor: automatically using best checksumming function avx Sep 4 17:29:00.798520 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:29:00.808388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:00.816679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:00.834662 systemd-udevd[396]: Using default interface naming scheme 'v255'. Sep 4 17:29:00.840688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:00.850184 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 4 17:29:00.866990 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Sep 4 17:29:00.894307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:00.902770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:00.942151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:00.960889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:29:00.977486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:00.988189 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:00.996820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:01.003093 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:01.013674 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:29:01.024541 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:29:01.046916 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:01.060631 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:29:01.060695 kernel: AES CTR mode by8 optimization enabled Sep 4 17:29:01.069988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:01.070152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:01.076232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:01.077115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:01.077252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:01.078274 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 17:29:01.102035 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:29:01.101158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:01.123213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:01.123332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:01.144520 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:29:01.144578 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:29:01.145261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:01.150909 kernel: scsi host0: storvsc_host_t Sep 4 17:29:01.151124 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:29:01.156528 kernel: scsi host1: storvsc_host_t Sep 4 17:29:01.160692 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:29:01.160801 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:29:01.164881 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:29:01.171453 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 17:29:01.196885 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:29:01.196945 kernel: PTP clock support registered Sep 4 17:29:01.197860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:01.214649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 17:29:01.232814 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:29:01.232892 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:29:01.238523 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 17:29:01.238797 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:29:01.238815 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:29:01.242065 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:29:01.243897 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:29:01.895252 systemd-resolved[213]: Clock change detected. Flushing caches. Sep 4 17:29:01.905171 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:29:01.914177 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:29:01.915715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:01.923175 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:29:01.930008 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 17:29:01.930053 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:29:01.945412 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:29:01.945666 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:29:01.947924 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 17:29:01.948152 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:29:01.953200 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:29:01.957200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:01.960186 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 17:29:02.055121 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: VF slot 1 added Sep 4 17:29:02.064079 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:29:02.064127 kernel: hv_pci 805f6e05-8239-44a1-8b72-ae0c1cd31623: PCI VMBus probing: Using version 0x10004 Sep 4 
17:29:02.071174 kernel: hv_pci 805f6e05-8239-44a1-8b72-ae0c1cd31623: PCI host bridge to bus 8239:00 Sep 4 17:29:02.071335 kernel: pci_bus 8239:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:29:02.075084 kernel: pci_bus 8239:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:29:02.079324 kernel: pci 8239:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:29:02.083200 kernel: pci 8239:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:29:02.086428 kernel: pci 8239:00:02.0: enabling Extended Tags Sep 4 17:29:02.096171 kernel: pci 8239:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8239:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:29:02.101254 kernel: pci_bus 8239:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:29:02.101547 kernel: pci 8239:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:29:02.286218 kernel: mlx5_core 8239:00:02.0: enabling device (0000 -> 0002) Sep 4 17:29:02.290193 kernel: mlx5_core 8239:00:02.0: firmware version: 14.30.1284 Sep 4 17:29:02.515346 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: VF registering: eth1 Sep 4 17:29:02.515718 kernel: mlx5_core 8239:00:02.0 eth1: joined to eth0 Sep 4 17:29:02.520181 kernel: mlx5_core 8239:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:29:02.528181 kernel: mlx5_core 8239:00:02.0 enP33337s1: renamed from eth1 Sep 4 17:29:02.802893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:29:02.882189 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (465) Sep 4 17:29:02.904069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:29:02.937135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Sep 4 17:29:02.992183 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (447) Sep 4 17:29:03.006370 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:29:03.011659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:29:03.028315 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:29:03.040177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:03.047181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:04.056467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:29:04.056537 disk-uuid[603]: The operation has completed successfully. Sep 4 17:29:04.154095 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:29:04.154234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:29:04.171337 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:29:04.176618 sh[689]: Success Sep 4 17:29:04.212180 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:29:04.422453 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:29:04.435228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:29:04.439741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 17:29:04.457490 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 17:29:04.457552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:04.460653 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:29:04.463103 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:29:04.465416 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:29:04.913059 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:29:04.915023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:29:04.923412 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:29:04.928533 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:29:04.950125 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:04.950203 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:04.952823 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:04.996222 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:05.012216 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:05.011767 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:29:05.015794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:05.022432 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:05.031388 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:29:05.041299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 4 17:29:05.059239 systemd-networkd[870]: lo: Link UP Sep 4 17:29:05.059250 systemd-networkd[870]: lo: Gained carrier Sep 4 17:29:05.064120 systemd-networkd[870]: Enumeration completed Sep 4 17:29:05.064228 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:05.066728 systemd[1]: Reached target network.target - Network. Sep 4 17:29:05.069548 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:05.069552 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:29:05.130191 kernel: mlx5_core 8239:00:02.0 enP33337s1: Link up Sep 4 17:29:05.168191 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: Data path switched to VF: enP33337s1 Sep 4 17:29:05.169235 systemd-networkd[870]: enP33337s1: Link UP Sep 4 17:29:05.169411 systemd-networkd[870]: eth0: Link UP Sep 4 17:29:05.169635 systemd-networkd[870]: eth0: Gained carrier Sep 4 17:29:05.169658 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 17:29:05.180392 systemd-networkd[870]: enP33337s1: Gained carrier Sep 4 17:29:05.204216 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:29:06.125043 ignition[873]: Ignition 2.18.0 Sep 4 17:29:06.125056 ignition[873]: Stage: fetch-offline Sep 4 17:29:06.125120 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:06.125134 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:06.125340 ignition[873]: parsed url from cmdline: "" Sep 4 17:29:06.125346 ignition[873]: no config URL provided Sep 4 17:29:06.125354 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:29:06.125365 ignition[873]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:29:06.125372 ignition[873]: failed to fetch config: resource requires networking Sep 4 17:29:06.128419 ignition[873]: Ignition finished successfully Sep 4 17:29:06.143328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:06.152328 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:29:06.166864 ignition[882]: Ignition 2.18.0 Sep 4 17:29:06.166874 ignition[882]: Stage: fetch Sep 4 17:29:06.167096 ignition[882]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:06.167108 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:06.167227 ignition[882]: parsed url from cmdline: "" Sep 4 17:29:06.167232 ignition[882]: no config URL provided Sep 4 17:29:06.167237 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:29:06.167247 ignition[882]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:29:06.167269 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:29:06.249287 ignition[882]: GET result: OK Sep 4 17:29:06.249465 ignition[882]: config has been read from IMDS userdata Sep 4 17:29:06.249509 ignition[882]: parsing config with SHA512: 9191f7757c2c772b4cbd699f6ec5e79f4b4c6898404e0a42eca9284f632d7085df0e58db39e5dd1f8b348bc16bbeb297af57507038303ea6b407d2cb7df12e40 Sep 4 17:29:06.255419 unknown[882]: fetched base config from "system" Sep 4 17:29:06.256261 ignition[882]: fetch: fetch complete Sep 4 17:29:06.255445 unknown[882]: fetched base config from "system" Sep 4 17:29:06.256268 ignition[882]: fetch: fetch passed Sep 4 17:29:06.255461 unknown[882]: fetched user config from "azure" Sep 4 17:29:06.256329 ignition[882]: Ignition finished successfully Sep 4 17:29:06.258053 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:29:06.276350 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:29:06.293318 ignition[890]: Ignition 2.18.0 Sep 4 17:29:06.293329 ignition[890]: Stage: kargs Sep 4 17:29:06.293565 ignition[890]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:06.296633 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 4 17:29:06.293578 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:06.294501 ignition[890]: kargs: kargs passed Sep 4 17:29:06.294550 ignition[890]: Ignition finished successfully Sep 4 17:29:06.310469 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:29:06.323854 ignition[897]: Ignition 2.18.0 Sep 4 17:29:06.323864 ignition[897]: Stage: disks Sep 4 17:29:06.324104 ignition[897]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:06.326030 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:29:06.324118 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:06.329451 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:06.325039 ignition[897]: disks: disks passed Sep 4 17:29:06.333341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:29:06.325086 ignition[897]: Ignition finished successfully Sep 4 17:29:06.347503 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:06.349769 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:06.354024 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:06.366339 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:29:06.422316 systemd-networkd[870]: eth0: Gained IPv6LL Sep 4 17:29:06.471310 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:29:06.476881 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:29:06.489377 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:29:06.593192 kernel: EXT4-fs (sda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 17:29:06.593710 systemd[1]: Mounted sysroot.mount - /sysroot. 
Sep 4 17:29:06.596012 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:06.658247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:06.662499 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:29:06.678055 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:29:06.680855 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (917) Sep 4 17:29:06.683792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:29:06.697553 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:06.697594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:06.697615 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:06.683848 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:06.700540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:29:06.706000 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:06.708409 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:29:06.713666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:29:06.870401 systemd-networkd[870]: enP33337s1: Gained IPv6LL Sep 4 17:29:07.432857 coreos-metadata[919]: Sep 04 17:29:07.432 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:29:07.437324 coreos-metadata[919]: Sep 04 17:29:07.435 INFO Fetch successful Sep 4 17:29:07.437324 coreos-metadata[919]: Sep 04 17:29:07.435 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:29:07.448211 coreos-metadata[919]: Sep 04 17:29:07.448 INFO Fetch successful Sep 4 17:29:07.467906 coreos-metadata[919]: Sep 04 17:29:07.467 INFO wrote hostname ci-3975.2.1-a-27f7f2cbdf to /sysroot/etc/hostname Sep 4 17:29:07.470221 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:29:07.665766 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:29:07.720706 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:29:07.727490 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:29:07.732236 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:29:09.045697 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:09.056274 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:29:09.061539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:29:09.073545 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:29:09.078724 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:09.099815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:29:09.107601 ignition[1040]: INFO : Ignition 2.18.0 Sep 4 17:29:09.107601 ignition[1040]: INFO : Stage: mount Sep 4 17:29:09.113539 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:09.113539 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:09.113539 ignition[1040]: INFO : mount: mount passed Sep 4 17:29:09.113539 ignition[1040]: INFO : Ignition finished successfully Sep 4 17:29:09.109606 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:29:09.122224 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:29:09.129314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:09.143176 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1052) Sep 4 17:29:09.147174 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:09.147208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:09.151438 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:29:09.157180 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:29:09.158106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:29:09.179550 ignition[1068]: INFO : Ignition 2.18.0 Sep 4 17:29:09.179550 ignition[1068]: INFO : Stage: files Sep 4 17:29:09.183321 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:09.183321 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:09.183321 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:29:09.183321 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:29:09.183321 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:29:09.283704 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:29:09.287956 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:29:09.291641 unknown[1068]: wrote ssh authorized keys file for user: core Sep 4 17:29:09.293844 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:29:09.338145 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:09.343630 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:29:09.684183 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:29:09.804238 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:29:09.809767 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Sep 4 17:29:10.319006 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:29:10.645898 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:29:10.645898 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:29:10.676035 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:10.681700 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:10.681700 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:29:10.688750 ignition[1068]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:10.688750 ignition[1068]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:10.695017 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:10.698870 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:10.704724 ignition[1068]: INFO : files: files passed Sep 4 17:29:10.704724 ignition[1068]: INFO : Ignition finished successfully Sep 4 17:29:10.700971 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:29:10.713987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:29:10.719313 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 4 17:29:10.722032 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:29:10.722141 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:29:10.736819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:10.745502 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:10.745502 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:10.740894 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:29:10.754631 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:10.758329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:29:10.783806 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:29:10.783929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:29:10.789122 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:29:10.793917 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:29:10.798048 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:29:10.806368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:29:10.820587 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:10.829306 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:29:10.839323 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:10.844438 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 4 17:29:10.849316 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:29:10.853268 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:29:10.853408 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:10.860246 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:29:10.862445 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:29:10.866282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:29:10.868718 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:10.877320 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:10.882102 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:29:10.884351 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:10.891964 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:29:10.894269 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:29:10.898547 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:29:10.903940 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:29:10.904119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:10.910412 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:10.915169 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:10.917666 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:29:10.919969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:10.922603 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:29:10.924733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 4 17:29:10.934061 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:29:10.934221 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:10.941791 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:29:10.941919 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:29:10.947903 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:29:10.948035 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:29:10.962364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:29:10.967412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:29:10.973479 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:29:10.973649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:10.974505 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:29:10.974603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:10.982706 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:29:10.982799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:29:10.997891 ignition[1122]: INFO : Ignition 2.18.0 Sep 4 17:29:10.997891 ignition[1122]: INFO : Stage: umount Sep 4 17:29:10.997891 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:10.997891 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:29:10.997891 ignition[1122]: INFO : umount: umount passed Sep 4 17:29:10.997891 ignition[1122]: INFO : Ignition finished successfully Sep 4 17:29:11.001098 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:29:11.001254 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 4 17:29:11.004232 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:29:11.004332 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:29:11.011283 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:29:11.011332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:29:11.015121 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:29:11.015183 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:29:11.018968 systemd[1]: Stopped target network.target - Network. Sep 4 17:29:11.019724 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:29:11.019777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:11.020096 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:29:11.020424 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:29:11.027201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:11.058450 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:29:11.060473 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:29:11.066642 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:29:11.066706 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:11.068925 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:29:11.068973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:11.072843 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:29:11.074701 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:29:11.084006 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:29:11.084066 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 4 17:29:11.086449 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:29:11.092801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:29:11.097203 systemd-networkd[870]: eth0: DHCPv6 lease lost Sep 4 17:29:11.098951 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:29:11.100096 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:29:11.100208 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:29:11.106239 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:29:11.106309 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:11.118519 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:29:11.120596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:29:11.122782 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:11.125785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:11.131874 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:29:11.131969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:29:11.145561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:29:11.145672 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:11.151622 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:29:11.152697 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:11.158736 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:29:11.158790 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:11.163853 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 4 17:29:11.163989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:11.170204 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:29:11.170291 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:11.174571 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:29:11.174628 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:11.180029 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:29:11.180077 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:11.186801 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:29:11.186845 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:11.206643 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: Data path switched from VF: enP33337s1 Sep 4 17:29:11.191392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:11.191434 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:11.214352 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:29:11.219171 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:29:11.219369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:11.227413 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:29:11.227482 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:11.230235 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:29:11.230297 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:11.233452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 4 17:29:11.233498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:11.250888 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:29:11.251019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:29:11.255321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:29:11.255404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:29:11.616860 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:29:11.617021 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:29:11.622086 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:29:11.625929 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:29:11.625995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:11.640381 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:29:12.089379 systemd[1]: Switching root. Sep 4 17:29:12.115025 systemd-journald[176]: Journal stopped Sep 4 17:29:18.465851 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
Sep 4 17:29:18.465896 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:29:18.465914 kernel: SELinux: policy capability open_perms=1 Sep 4 17:29:18.465928 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:29:18.465942 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:29:18.465956 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:29:18.465971 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:29:18.465990 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:29:18.466003 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:29:18.466018 kernel: audit: type=1403 audit(1725470953.394:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:29:18.466036 systemd[1]: Successfully loaded SELinux policy in 251.038ms. Sep 4 17:29:18.466053 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.368ms. Sep 4 17:29:18.466070 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:18.466086 systemd[1]: Detected virtualization microsoft. Sep 4 17:29:18.466107 systemd[1]: Detected architecture x86-64. Sep 4 17:29:18.466123 systemd[1]: Detected first boot. Sep 4 17:29:18.466140 systemd[1]: Hostname set to . Sep 4 17:29:18.466176 systemd[1]: Initializing machine ID from random generator. Sep 4 17:29:18.466194 zram_generator::config[1166]: No configuration found. Sep 4 17:29:18.466214 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:29:18.466230 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:29:18.466246 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Sep 4 17:29:18.466262 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:29:18.466280 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:29:18.466296 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:29:18.466314 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:29:18.466333 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:29:18.466350 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:29:18.466367 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:29:18.466384 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:29:18.466403 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:29:18.466421 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:18.466438 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:18.466455 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:29:18.466475 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:29:18.466492 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:29:18.466509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:18.466526 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:29:18.466543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:18.466560 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Sep 4 17:29:18.466581 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:29:18.466598 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:18.466618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:29:18.466635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:18.466653 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:18.466669 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:18.466687 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:18.466704 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:29:18.466721 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:29:18.466741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:18.466758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:18.466776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:18.466797 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:29:18.466815 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:29:18.466835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:29:18.466853 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:29:18.466871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:18.466889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:29:18.466906 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:29:18.466923 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 4 17:29:18.466941 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:29:18.466959 systemd[1]: Reached target machines.target - Containers. Sep 4 17:29:18.466978 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:29:18.466995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:18.467014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:18.467031 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:29:18.467049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:18.467066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:18.467084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:18.467101 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:29:18.467118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:18.467140 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:29:18.467186 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:29:18.467205 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:29:18.467223 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:29:18.467241 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:29:18.467258 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:18.467300 systemd-journald[1264]: Collecting audit messages is disabled. 
Sep 4 17:29:18.467339 kernel: loop: module loaded Sep 4 17:29:18.467355 systemd-journald[1264]: Journal started Sep 4 17:29:18.467390 systemd-journald[1264]: Runtime Journal (/run/log/journal/d3801f871dd748d0900c0c6e337d12be) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:29:17.704656 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:29:17.924154 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:29:17.924580 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:29:18.481449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:18.489209 kernel: fuse: init (API version 7.39) Sep 4 17:29:18.489260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:29:18.500176 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:29:18.518179 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:18.525759 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:29:18.525811 systemd[1]: Stopped verity-setup.service. Sep 4 17:29:18.535533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:18.552300 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:29:18.547216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:29:18.556079 kernel: ACPI: bus type drm_connector registered Sep 4 17:29:18.549812 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:29:18.552729 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:29:18.556873 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:29:18.559575 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 4 17:29:18.562311 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:29:18.564648 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:29:18.567368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:29:18.570531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:29:18.570740 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:29:18.573447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:29:18.573606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:29:18.576223 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:29:18.576380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:29:18.579094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:29:18.579310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:29:18.582219 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:29:18.582376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:29:18.584861 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:29:18.585014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:29:18.587522 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:29:18.590642 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:29:18.602928 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:29:18.613570 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:29:18.617265 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:29:18.619946 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:29:18.619985 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:29:18.625412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:29:18.635091 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:29:18.642331 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:29:18.644718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:29:18.669892 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:29:18.680329 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:29:18.683036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:29:18.689327 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:29:18.691746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:29:18.693911 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:29:18.701279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:29:18.707217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:29:18.713515 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:29:18.716562 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:29:18.719296 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:29:18.722056 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:29:18.725062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:29:18.733400 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:29:18.741621 kernel: loop0: detected capacity change from 0 to 211296
Sep 4 17:29:18.741674 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:29:18.745405 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:29:18.753047 systemd-journald[1264]: Time spent on flushing to /var/log/journal/d3801f871dd748d0900c0c6e337d12be is 87.152ms for 965 entries.
Sep 4 17:29:18.753047 systemd-journald[1264]: System Journal (/var/log/journal/d3801f871dd748d0900c0c6e337d12be) is 8.0M, max 2.6G, 2.6G free.
Sep 4 17:29:18.882153 systemd-journald[1264]: Received client request to flush runtime journal.
Sep 4 17:29:18.882232 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:29:18.758333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:29:18.766329 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:29:18.792368 udevadm[1312]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:29:18.884647 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:29:18.912070 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Sep 4 17:29:18.912108 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Sep 4 17:29:18.922827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:29:18.931467 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:29:18.941180 kernel: loop1: detected capacity change from 0 to 56904
Sep 4 17:29:18.945772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:29:18.946757 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:29:18.993480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:29:19.123569 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:29:19.131410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:29:19.155886 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Sep 4 17:29:19.156328 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Sep 4 17:29:19.163232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:29:19.350191 kernel: loop2: detected capacity change from 0 to 80568
Sep 4 17:29:19.897192 kernel: loop3: detected capacity change from 0 to 139904
Sep 4 17:29:19.978959 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:29:19.988336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:29:20.010704 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Sep 4 17:29:20.233766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:29:20.253336 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:29:20.298629 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:29:20.338568 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1333)
Sep 4 17:29:20.357359 kernel: loop4: detected capacity change from 0 to 211296
Sep 4 17:29:20.372202 kernel: loop5: detected capacity change from 0 to 56904
Sep 4 17:29:20.383527 kernel: loop6: detected capacity change from 0 to 80568
Sep 4 17:29:20.396181 kernel: loop7: detected capacity change from 0 to 139904
Sep 4 17:29:20.412105 (sd-merge)[1353]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 4 17:29:20.412736 (sd-merge)[1353]: Merged extensions into '/usr'.
Sep 4 17:29:20.423326 kernel: hv_vmbus: registering driver hyperv_fb
Sep 4 17:29:20.423402 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 4 17:29:20.439042 kernel: hv_vmbus: registering driver hv_balloon
Sep 4 17:29:20.439136 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 4 17:29:20.450202 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:29:20.458974 kernel: Console: switching to colour dummy device 80x25
Sep 4 17:29:20.468189 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 4 17:29:20.477863 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:29:20.480085 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:29:20.490210 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:29:20.490234 systemd[1]: Reloading...
Sep 4 17:29:20.668181 zram_generator::config[1393]: No configuration found.
Sep 4 17:29:20.860177 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1341)
Sep 4 17:29:21.008192 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 4 17:29:21.049975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:29:21.138083 systemd-networkd[1340]: lo: Link UP
Sep 4 17:29:21.138097 systemd-networkd[1340]: lo: Gained carrier
Sep 4 17:29:21.141522 systemd-networkd[1340]: Enumeration completed
Sep 4 17:29:21.141965 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:29:21.141977 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:29:21.176691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 17:29:21.179638 systemd[1]: Reloading finished in 688 ms.
Sep 4 17:29:21.200222 kernel: mlx5_core 8239:00:02.0 enP33337s1: Link up
Sep 4 17:29:21.209398 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:29:21.212248 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:29:21.214941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:29:21.223335 kernel: hv_netvsc 000d3a65-7ef1-000d-3a65-7ef1000d3a65 eth0: Data path switched to VF: enP33337s1
Sep 4 17:29:21.224014 systemd-networkd[1340]: enP33337s1: Link UP
Sep 4 17:29:21.224189 systemd-networkd[1340]: eth0: Link UP
Sep 4 17:29:21.224197 systemd-networkd[1340]: eth0: Gained carrier
Sep 4 17:29:21.224219 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:29:21.228514 systemd-networkd[1340]: enP33337s1: Gained carrier
Sep 4 17:29:21.255456 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:29:21.258385 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 4 17:29:21.260472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:29:21.266321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:29:21.272780 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:29:21.283446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:29:21.296291 systemd[1]: Reloading requested from client PID 1494 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:29:21.296306 systemd[1]: Reloading...
Sep 4 17:29:21.323835 systemd-tmpfiles[1497]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:29:21.326499 systemd-tmpfiles[1497]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:29:21.327986 systemd-tmpfiles[1497]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:29:21.328510 systemd-tmpfiles[1497]: ACLs are not supported, ignoring.
Sep 4 17:29:21.328634 systemd-tmpfiles[1497]: ACLs are not supported, ignoring.
Sep 4 17:29:21.366682 systemd-tmpfiles[1497]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:29:21.366872 systemd-tmpfiles[1497]: Skipping /boot
Sep 4 17:29:21.373186 zram_generator::config[1534]: No configuration found.
Sep 4 17:29:21.381448 systemd-tmpfiles[1497]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:29:21.381593 systemd-tmpfiles[1497]: Skipping /boot
Sep 4 17:29:21.515896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:29:21.598504 systemd[1]: Reloading finished in 301 ms.
Sep 4 17:29:21.614923 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:29:21.619049 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:29:21.622134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:29:21.641428 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:29:21.657919 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:29:21.664276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:29:21.672432 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:29:21.680268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:29:21.684574 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:29:21.691362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:29:21.691617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:29:21.693145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:29:21.704557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:29:21.710934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:29:21.715705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:29:21.715882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:29:21.717615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:29:21.717816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:29:21.721906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:29:21.722300 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:29:21.742197 lvm[1596]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:29:21.745832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:29:21.747573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:29:21.759344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:29:21.770534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:29:21.774312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:29:21.774672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:29:21.781474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:29:21.781674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:29:21.787624 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:29:21.791394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:29:21.791569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:29:21.796801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:29:21.796987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:29:21.806432 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:29:21.818173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:29:21.822296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:29:21.822535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:29:21.830340 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:29:21.838260 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:29:21.845046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:29:21.856834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:29:21.863581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:29:21.873355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:29:21.875950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:29:21.876037 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:29:21.878429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:29:21.879333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:29:21.882511 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:29:21.885212 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:29:21.890367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:29:21.890563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:29:21.893741 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:29:21.893928 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:29:21.896882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:29:21.897055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:29:21.903745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:29:21.906841 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:29:21.907180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:29:21.915566 systemd-resolved[1599]: Positive Trust Anchors:
Sep 4 17:29:21.915585 systemd-resolved[1599]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:29:21.915647 systemd-resolved[1599]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:29:21.919985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:29:21.920068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:29:21.929430 augenrules[1635]: No rules
Sep 4 17:29:21.930674 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:29:21.983524 systemd-resolved[1599]: Using system hostname 'ci-3975.2.1-a-27f7f2cbdf'.
Sep 4 17:29:21.985942 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:29:21.988782 systemd[1]: Reached target network.target - Network.
Sep 4 17:29:21.991056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:29:22.358475 systemd-networkd[1340]: eth0: Gained IPv6LL
Sep 4 17:29:22.361342 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:29:22.365496 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:29:22.441687 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:29:22.444908 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:29:22.550429 systemd-networkd[1340]: enP33337s1: Gained IPv6LL
Sep 4 17:29:25.507911 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:29:25.517634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:29:25.526360 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:29:25.539869 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:29:25.542510 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:29:25.545275 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:29:25.548085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:29:25.551177 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:29:25.553546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:29:25.556323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:29:25.559033 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:29:25.559078 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:29:25.561067 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:29:25.564136 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:29:25.567982 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:29:25.598737 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:29:25.602057 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:29:25.605098 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:29:25.607675 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:29:25.609755 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:29:25.609800 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:29:25.616272 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 4 17:29:25.620291 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:29:25.635300 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:29:25.641561 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:29:25.653169 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:29:25.657030 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:29:25.660232 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:29:25.661971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:29:25.667552 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:29:25.674329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:29:25.676502 (chronyd)[1655]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 4 17:29:25.679795 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:29:25.692356 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:29:25.693650 jq[1661]: false
Sep 4 17:29:25.705658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:29:25.707524 chronyd[1673]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 4 17:29:25.713639 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:29:25.716779 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:29:25.718298 chronyd[1673]: Timezone right/UTC failed leap second check, ignoring
Sep 4 17:29:25.718006 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:29:25.718519 chronyd[1673]: Loaded seccomp filter (level 2)
Sep 4 17:29:25.724311 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:29:25.733426 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:29:25.739503 systemd[1]: Started chronyd.service - NTP client/server.
Sep 4 17:29:25.753578 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:29:25.753829 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:29:25.762599 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:29:25.762838 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:29:25.775646 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found loop4
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found loop5
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found loop6
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found loop7
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda1
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda2
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda3
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found usr
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda4
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda6
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda7
Sep 4 17:29:25.779182 extend-filesystems[1662]: Found sda9
Sep 4 17:29:25.779182 extend-filesystems[1662]: Checking size of /dev/sda9
Sep 4 17:29:25.775900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:29:25.833786 jq[1681]: true
Sep 4 17:29:25.820086 (ntainerd)[1692]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:29:25.836908 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:29:25.847481 jq[1698]: true
Sep 4 17:29:25.889359 systemd-logind[1674]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:29:25.889619 systemd-logind[1674]: New seat seat0.
Sep 4 17:29:25.890301 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:29:25.905040 dbus-daemon[1658]: [system] SELinux support is enabled
Sep 4 17:29:25.905215 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:29:25.913676 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:29:25.913715 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:29:25.917278 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:29:25.917308 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:29:25.931578 extend-filesystems[1662]: Old size kept for /dev/sda9
Sep 4 17:29:25.933796 extend-filesystems[1662]: Found sr0
Sep 4 17:29:25.952980 update_engine[1676]: I0904 17:29:25.952903  1676 main.cc:92] Flatcar Update Engine starting
Sep 4 17:29:25.954046 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:29:25.954292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:29:25.962178 tar[1686]: linux-amd64/helm
Sep 4 17:29:25.970749 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:29:25.979790 update_engine[1676]: I0904 17:29:25.979746  1676 update_check_scheduler.cc:74] Next update check in 2m43s
Sep 4 17:29:25.984790 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:29:26.050224 bash[1727]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:29:26.058293 sshd_keygen[1687]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:29:26.065337 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:29:26.068635 coreos-metadata[1657]: Sep 04 17:29:26.068 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 17:29:26.072306 coreos-metadata[1657]: Sep 04 17:29:26.071 INFO Fetch successful
Sep 4 17:29:26.072306 coreos-metadata[1657]: Sep 04 17:29:26.071 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 4 17:29:26.075791 coreos-metadata[1657]: Sep 04 17:29:26.075 INFO Fetch successful
Sep 4 17:29:26.076090 coreos-metadata[1657]: Sep 04 17:29:26.076 INFO Fetching http://168.63.129.16/machine/c9d31c83-58fa-4dc0-92f2-fa0ec8698d41/1c95270a%2D5af5%2D4787%2Da1e7%2D0297673f487e.%5Fci%2D3975.2.1%2Da%2D27f7f2cbdf?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 4 17:29:26.078489 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 17:29:26.079312 coreos-metadata[1657]: Sep 04 17:29:26.079 INFO Fetch successful
Sep 4 17:29:26.080057 coreos-metadata[1657]: Sep 04 17:29:26.080 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 4 17:29:26.094415 coreos-metadata[1657]: Sep 04 17:29:26.094 INFO Fetch successful
Sep 4 17:29:26.159690 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:29:26.169796 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:29:26.177225 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:29:26.196567 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:29:26.210353 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 4 17:29:26.238029 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1733)
Sep 4 17:29:26.276944 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:29:26.277619 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:29:26.323194 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:29:26.382126 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 4 17:29:26.390249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:29:26.394064 locksmithd[1728]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:29:26.400339 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:29:26.413546 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:29:26.421471 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:29:26.861397 tar[1686]: linux-amd64/LICENSE
Sep 4 17:29:26.861639 tar[1686]: linux-amd64/README.md
Sep 4 17:29:26.875256 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:29:26.920540 containerd[1692]: time="2024-09-04T17:29:26.919045600Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:29:26.956624 containerd[1692]: time="2024-09-04T17:29:26.955825300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:29:26.956624 containerd[1692]: time="2024-09-04T17:29:26.955884500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.957813400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.957860100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.958121800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.958149800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.958267700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.958335300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958419 containerd[1692]: time="2024-09-04T17:29:26.958358200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958707 containerd[1692]: time="2024-09-04T17:29:26.958440400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958707 containerd[1692]: time="2024-09-04T17:29:26.958680000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.958781 containerd[1692]: time="2024-09-04T17:29:26.958709700Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:29:26.958781 containerd[1692]: time="2024-09-04T17:29:26.958729200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:29:26.959204 containerd[1692]: time="2024-09-04T17:29:26.958997500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:29:26.959204 containerd[1692]: time="2024-09-04T17:29:26.959026200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:29:26.959204 containerd[1692]: time="2024-09-04T17:29:26.959104200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:29:26.959204 containerd[1692]: time="2024-09-04T17:29:26.959118900Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975107900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975194700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975215600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975251700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975271500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975287200Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975302800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975433000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975456000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975476200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975495900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975515000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975537000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.975601 containerd[1692]: time="2024-09-04T17:29:26.975553900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.975571500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.975609600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.975630000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.975647200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.975662300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.975790400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:29:26.976104 containerd[1692]: time="2024-09-04T17:29:26.976077200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976108100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976127400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976176800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976255400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976274000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976293300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976310400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976328400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976350 containerd[1692]: time="2024-09-04T17:29:26.976346300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976363700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976380500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976398100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976552000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976574100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976590800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976610200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976628000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976649 containerd[1692]: time="2024-09-04T17:29:26.976647500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976934 containerd[1692]: time="2024-09-04T17:29:26.976666000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.976934 containerd[1692]: time="2024-09-04T17:29:26.976683800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:29:26.977185 containerd[1692]: time="2024-09-04T17:29:26.977024300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:29:26.977185 containerd[1692]: time="2024-09-04T17:29:26.977129700Z" level=info msg="Connect containerd service"
Sep 4 17:29:26.977185 containerd[1692]: time="2024-09-04T17:29:26.977180100Z" level=info msg="using legacy CRI server"
Sep 4 17:29:26.977467 containerd[1692]: time="2024-09-04T17:29:26.977191700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:29:26.977467 containerd[1692]: time="2024-09-04T17:29:26.977297600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978188000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978276500Z" level=info msg="Start subscribing containerd event"
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978326200Z" level=info msg="Start recovering state"
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978397600Z" level=info msg="Start event monitor"
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978415000Z" level=info msg="Start snapshots syncer"
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978426500Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:29:26.978746 containerd[1692]: time="2024-09-04T17:29:26.978436900Z" level=info msg="Start streaming server"
Sep 4 17:29:26.979067 containerd[1692]: time="2024-09-04T17:29:26.979039900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:29:26.979194 containerd[1692]: time="2024-09-04T17:29:26.979130300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:29:26.979329 containerd[1692]: time="2024-09-04T17:29:26.979152500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:29:26.979329 containerd[1692]: time="2024-09-04T17:29:26.979281400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:29:26.979556 containerd[1692]: time="2024-09-04T17:29:26.979528900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:29:26.979619 containerd[1692]: time="2024-09-04T17:29:26.979586300Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:29:26.979857 containerd[1692]: time="2024-09-04T17:29:26.979655100Z" level=info msg="containerd successfully booted in 0.062997s"
Sep 4 17:29:26.979772 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:29:27.184388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:29:27.188353 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:29:27.188465 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:29:27.191423 systemd[1]: Startup finished in 843ms (firmware) + 34.548s (loader) + 895ms (kernel) + 12.808s (initrd) + 14.047s (userspace) = 1min 3.143s.
Sep 4 17:29:27.621093 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 17:29:27.624585 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 17:29:27.641494 systemd-logind[1674]: New session 2 of user core.
Sep 4 17:29:27.643120 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:29:27.649462 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:29:27.654679 systemd-logind[1674]: New session 1 of user core.
Sep 4 17:29:27.676755 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:29:27.685550 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:29:27.691195 (systemd)[1828]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:29:27.873137 waagent[1796]: 2024-09-04T17:29:27.872953Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Sep 4 17:29:27.876594 waagent[1796]: 2024-09-04T17:29:27.876513Z INFO Daemon Daemon OS: flatcar 3975.2.1
Sep 4 17:29:27.878810 waagent[1796]: 2024-09-04T17:29:27.878745Z INFO Daemon Daemon Python: 3.11.9
Sep 4 17:29:27.880866 waagent[1796]: 2024-09-04T17:29:27.880792Z INFO Daemon Daemon Run daemon
Sep 4 17:29:27.884447 waagent[1796]: 2024-09-04T17:29:27.882698Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.2.1'
Sep 4 17:29:27.884447 waagent[1796]: 2024-09-04T17:29:27.883602Z INFO Daemon Daemon Using waagent for provisioning
Sep 4 17:29:27.885020 waagent[1796]: 2024-09-04T17:29:27.884980Z INFO Daemon Daemon Activate resource disk
Sep 4 17:29:27.885711 waagent[1796]: 2024-09-04T17:29:27.885673Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 4 17:29:27.890021 waagent[1796]: 2024-09-04T17:29:27.889976Z INFO Daemon Daemon Found device: None
Sep 4 17:29:27.890674 waagent[1796]: 2024-09-04T17:29:27.890636Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 4 17:29:27.891573 waagent[1796]: 2024-09-04T17:29:27.891536Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 4 17:29:27.894097 waagent[1796]: 2024-09-04T17:29:27.894050Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 4 17:29:27.895004 waagent[1796]: 2024-09-04T17:29:27.894967Z INFO Daemon Daemon Running default provisioning handler
Sep 4 17:29:27.918182 waagent[1796]: 2024-09-04T17:29:27.918061Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 4 17:29:27.920281 waagent[1796]: 2024-09-04T17:29:27.920221Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 4 17:29:27.921239 waagent[1796]: 2024-09-04T17:29:27.921199Z INFO Daemon Daemon cloud-init is enabled: False
Sep 4 17:29:27.921934 waagent[1796]: 2024-09-04T17:29:27.921900Z INFO Daemon Daemon Copying ovf-env.xml
Sep 4 17:29:28.033616 waagent[1796]: 2024-09-04T17:29:28.030400Z INFO Daemon Daemon Successfully mounted dvd
Sep 4 17:29:28.067677 systemd[1828]: Queued start job for default target default.target.
Sep 4 17:29:28.079661 systemd[1828]: Created slice app.slice - User Application Slice.
Sep 4 17:29:28.079877 systemd[1828]: Reached target paths.target - Paths.
Sep 4 17:29:28.079975 systemd[1828]: Reached target timers.target - Timers.
Sep 4 17:29:28.080759 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 4 17:29:28.086377 waagent[1796]: 2024-09-04T17:29:28.083987Z INFO Daemon Daemon Detect protocol endpoint
Sep 4 17:29:28.086377 waagent[1796]: 2024-09-04T17:29:28.085318Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 4 17:29:28.086377 waagent[1796]: 2024-09-04T17:29:28.086060Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 4 17:29:28.086927 waagent[1796]: 2024-09-04T17:29:28.086886Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 4 17:29:28.087812 waagent[1796]: 2024-09-04T17:29:28.087770Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 4 17:29:28.088453 waagent[1796]: 2024-09-04T17:29:28.088416Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 4 17:29:28.097886 systemd[1828]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:29:28.116938 waagent[1796]: 2024-09-04T17:29:28.116858Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 4 17:29:28.120243 waagent[1796]: 2024-09-04T17:29:28.120203Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 4 17:29:28.121705 waagent[1796]: 2024-09-04T17:29:28.121659Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 4 17:29:28.123589 systemd[1828]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:29:28.124689 systemd[1828]: Reached target sockets.target - Sockets.
Sep 4 17:29:28.124847 systemd[1828]: Reached target basic.target - Basic System.
Sep 4 17:29:28.124996 systemd[1828]: Reached target default.target - Main User Target.
Sep 4 17:29:28.125111 systemd[1828]: Startup finished in 424ms.
Sep 4 17:29:28.125354 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:29:28.134370 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:29:28.135390 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:29:28.137066 kubelet[1816]: E0904 17:29:28.135952 1816 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:29:28.139867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:29:28.140035 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:29:28.142256 systemd[1]: kubelet.service: Consumed 1.018s CPU time.
Sep 4 17:29:28.280458 waagent[1796]: 2024-09-04T17:29:28.280347Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 4 17:29:28.283701 waagent[1796]: 2024-09-04T17:29:28.283634Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 4 17:29:28.289559 waagent[1796]: 2024-09-04T17:29:28.289503Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 4 17:29:28.331307 waagent[1796]: 2024-09-04T17:29:28.331229Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.154
Sep 4 17:29:28.347237 waagent[1796]: 2024-09-04T17:29:28.333022Z INFO Daemon
Sep 4 17:29:28.347237 waagent[1796]: 2024-09-04T17:29:28.334800Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 37f69f1f-1928-48e7-b1b1-85836d340a4c eTag: 2424834199543901975 source: Fabric]
Sep 4 17:29:28.347237 waagent[1796]: 2024-09-04T17:29:28.336342Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 4 17:29:28.347237 waagent[1796]: 2024-09-04T17:29:28.337940Z INFO Daemon
Sep 4 17:29:28.347237 waagent[1796]: 2024-09-04T17:29:28.338727Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 4 17:29:28.347237 waagent[1796]: 2024-09-04T17:29:28.343872Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 4 17:29:28.416031 waagent[1796]: 2024-09-04T17:29:28.415946Z INFO Daemon Downloaded certificate {'thumbprint': 'F4F04198661BFD62E8F62037C7F9AC17E32AD028', 'hasPrivateKey': False}
Sep 4 17:29:28.421344 waagent[1796]: 2024-09-04T17:29:28.421285Z INFO Daemon Downloaded certificate {'thumbprint': 'C27C5B3521A4A22D2280B7EF4F631E0EE52BFB56', 'hasPrivateKey': True}
Sep 4 17:29:28.426830 waagent[1796]: 2024-09-04T17:29:28.422844Z INFO Daemon Fetch goal state completed
Sep 4 17:29:28.436329 waagent[1796]: 2024-09-04T17:29:28.436283Z INFO Daemon Daemon Starting provisioning
Sep 4 17:29:28.442223 waagent[1796]: 2024-09-04T17:29:28.437364Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 4 17:29:28.442223 waagent[1796]: 2024-09-04T17:29:28.438091Z INFO Daemon Daemon Set hostname [ci-3975.2.1-a-27f7f2cbdf]
Sep 4 17:29:28.456306 waagent[1796]: 2024-09-04T17:29:28.456243Z INFO Daemon Daemon Publish hostname [ci-3975.2.1-a-27f7f2cbdf]
Sep 4 17:29:28.463491 waagent[1796]: 2024-09-04T17:29:28.457661Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 4 17:29:28.463491 waagent[1796]: 2024-09-04T17:29:28.458416Z INFO Daemon Daemon Primary interface is [eth0]
Sep 4 17:29:28.484975 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:29:28.484985 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:29:28.485035 systemd-networkd[1340]: eth0: DHCP lease lost
Sep 4 17:29:28.486373 waagent[1796]: 2024-09-04T17:29:28.486288Z INFO Daemon Daemon Create user account if not exists
Sep 4 17:29:28.501031 waagent[1796]: 2024-09-04T17:29:28.487649Z INFO Daemon Daemon User core already exists, skip useradd
Sep 4 17:29:28.501031 waagent[1796]: 2024-09-04T17:29:28.488362Z INFO Daemon Daemon Configure sudoer
Sep 4 17:29:28.501031 waagent[1796]: 2024-09-04T17:29:28.489406Z INFO Daemon Daemon Configure sshd
Sep 4 17:29:28.501031 waagent[1796]: 2024-09-04T17:29:28.490074Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 4 17:29:28.501031 waagent[1796]: 2024-09-04T17:29:28.490726Z INFO Daemon Daemon Deploy ssh public key.
Sep 4 17:29:28.502283 systemd-networkd[1340]: eth0: DHCPv6 lease lost
Sep 4 17:29:28.545217 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 4 17:29:29.809250 waagent[1796]: 2024-09-04T17:29:29.809134Z INFO Daemon Daemon Provisioning complete
Sep 4 17:29:29.824037 waagent[1796]: 2024-09-04T17:29:29.823970Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 4 17:29:29.830436 waagent[1796]: 2024-09-04T17:29:29.825206Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 4 17:29:29.830436 waagent[1796]: 2024-09-04T17:29:29.825927Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Sep 4 17:29:29.952264 waagent[1879]: 2024-09-04T17:29:29.952144Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Sep 4 17:29:29.952684 waagent[1879]: 2024-09-04T17:29:29.952342Z INFO ExtHandler ExtHandler OS: flatcar 3975.2.1
Sep 4 17:29:29.952684 waagent[1879]: 2024-09-04T17:29:29.952426Z INFO ExtHandler ExtHandler Python: 3.11.9
Sep 4 17:29:29.978202 waagent[1879]: 2024-09-04T17:29:29.978090Z INFO ExtHandler ExtHandler Distro: flatcar-3975.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 4 17:29:29.978455 waagent[1879]: 2024-09-04T17:29:29.978399Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 17:29:29.978568 waagent[1879]: 2024-09-04T17:29:29.978515Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 17:29:29.986875 waagent[1879]: 2024-09-04T17:29:29.986797Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 4 17:29:29.992178 waagent[1879]: 2024-09-04T17:29:29.992123Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.154
Sep 4 17:29:29.992642 waagent[1879]: 2024-09-04T17:29:29.992584Z INFO ExtHandler
Sep 4 17:29:29.992730 waagent[1879]: 2024-09-04T17:29:29.992675Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 59bbb594-283e-4455-82b0-bb670ec78a9b eTag: 2424834199543901975 source: Fabric]
Sep 4 17:29:29.993036 waagent[1879]: 2024-09-04T17:29:29.992984Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 4 17:29:29.993619 waagent[1879]: 2024-09-04T17:29:29.993561Z INFO ExtHandler
Sep 4 17:29:29.993682 waagent[1879]: 2024-09-04T17:29:29.993646Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 4 17:29:29.997045 waagent[1879]: 2024-09-04T17:29:29.996995Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 4 17:29:30.081235 waagent[1879]: 2024-09-04T17:29:30.081066Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F4F04198661BFD62E8F62037C7F9AC17E32AD028', 'hasPrivateKey': False}
Sep 4 17:29:30.081656 waagent[1879]: 2024-09-04T17:29:30.081592Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C27C5B3521A4A22D2280B7EF4F631E0EE52BFB56', 'hasPrivateKey': True}
Sep 4 17:29:30.082101 waagent[1879]: 2024-09-04T17:29:30.082050Z INFO ExtHandler Fetch goal state completed
Sep 4 17:29:30.096861 waagent[1879]: 2024-09-04T17:29:30.096798Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1879
Sep 4 17:29:30.097011 waagent[1879]: 2024-09-04T17:29:30.096964Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 4 17:29:30.098578 waagent[1879]: 2024-09-04T17:29:30.098519Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.2.1', '', 'Flatcar Container Linux by Kinvolk']
Sep 4 17:29:30.098939 waagent[1879]: 2024-09-04T17:29:30.098893Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 4 17:29:30.176154 waagent[1879]: 2024-09-04T17:29:30.176094Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 4 17:29:30.176442 waagent[1879]: 2024-09-04T17:29:30.176382Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 4 17:29:30.184285 waagent[1879]: 2024-09-04T17:29:30.184204Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 4 17:29:30.191128 systemd[1]: Reloading requested from client PID 1894 ('systemctl') (unit waagent.service)...
Sep 4 17:29:30.191146 systemd[1]: Reloading...
Sep 4 17:29:30.262203 zram_generator::config[1925]: No configuration found.
Sep 4 17:29:30.388137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:29:30.469319 systemd[1]: Reloading finished in 277 ms.
Sep 4 17:29:30.501109 waagent[1879]: 2024-09-04T17:29:30.497127Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Sep 4 17:29:30.506424 systemd[1]: Reloading requested from client PID 1982 ('systemctl') (unit waagent.service)...
Sep 4 17:29:30.506440 systemd[1]: Reloading...
Sep 4 17:29:30.587980 zram_generator::config[2018]: No configuration found.
Sep 4 17:29:30.703807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:29:30.783612 systemd[1]: Reloading finished in 276 ms.
Sep 4 17:29:30.808745 waagent[1879]: 2024-09-04T17:29:30.807455Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 17:29:30.808745 waagent[1879]: 2024-09-04T17:29:30.807671Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 17:29:31.955937 waagent[1879]: 2024-09-04T17:29:31.955820Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 17:29:31.956810 waagent[1879]: 2024-09-04T17:29:31.956727Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 17:29:31.957768 waagent[1879]: 2024-09-04T17:29:31.957694Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 17:29:31.957962 waagent[1879]: 2024-09-04T17:29:31.957870Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:29:31.958101 waagent[1879]: 2024-09-04T17:29:31.958049Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:29:31.958822 waagent[1879]: 2024-09-04T17:29:31.958767Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:29:31.959082 waagent[1879]: 2024-09-04T17:29:31.959024Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 17:29:31.959391 waagent[1879]: 2024-09-04T17:29:31.959341Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:29:31.959504 waagent[1879]: 2024-09-04T17:29:31.959399Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 4 17:29:31.959981 waagent[1879]: 2024-09-04T17:29:31.959920Z INFO EnvHandler ExtHandler Configure routes Sep 4 17:29:31.960146 waagent[1879]: 2024-09-04T17:29:31.960032Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 17:29:31.960386 waagent[1879]: 2024-09-04T17:29:31.960330Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 17:29:31.960818 waagent[1879]: 2024-09-04T17:29:31.960640Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 17:29:31.960818 waagent[1879]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 17:29:31.960818 waagent[1879]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 17:29:31.960818 waagent[1879]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 17:29:31.960818 waagent[1879]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:29:31.960818 waagent[1879]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:29:31.960818 waagent[1879]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:29:31.961153 waagent[1879]: 2024-09-04T17:29:31.960919Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 17:29:31.961331 waagent[1879]: 2024-09-04T17:29:31.961272Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 4 17:29:31.961787 waagent[1879]: 2024-09-04T17:29:31.961709Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 17:29:31.962802 waagent[1879]: 2024-09-04T17:29:31.962609Z INFO EnvHandler ExtHandler Gateway:None Sep 4 17:29:31.962905 waagent[1879]: 2024-09-04T17:29:31.962848Z INFO EnvHandler ExtHandler Routes:None Sep 4 17:29:31.971662 waagent[1879]: 2024-09-04T17:29:31.971615Z INFO ExtHandler ExtHandler Sep 4 17:29:31.971760 waagent[1879]: 2024-09-04T17:29:31.971718Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fb30cd42-f57a-4bb9-bb18-89e167fb127d correlation da57e1d5-be96-4b07-b84d-0b3bc6cb8400 created: 2024-09-04T17:28:13.594070Z] Sep 4 17:29:31.972127 waagent[1879]: 2024-09-04T17:29:31.972080Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 17:29:31.972675 waagent[1879]: 2024-09-04T17:29:31.972631Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 4 17:29:32.012752 waagent[1879]: 2024-09-04T17:29:32.012688Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 363F9F20-4A82-469E-804E-D55D7CAD9F92;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 17:29:32.024730 waagent[1879]: 2024-09-04T17:29:32.024666Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 17:29:32.024730 waagent[1879]: Executing ['ip', '-a', '-o', 'link']: Sep 4 17:29:32.024730 waagent[1879]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 17:29:32.024730 waagent[1879]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:65:7e:f1 brd ff:ff:ff:ff:ff:ff Sep 4 17:29:32.024730 waagent[1879]: 3: enP33337s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT 
group default qlen 1000\ link/ether 00:0d:3a:65:7e:f1 brd ff:ff:ff:ff:ff:ff\ altname enP33337p0s2 Sep 4 17:29:32.024730 waagent[1879]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 17:29:32.024730 waagent[1879]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 17:29:32.024730 waagent[1879]: 2: eth0 inet 10.200.8.34/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 17:29:32.024730 waagent[1879]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 17:29:32.024730 waagent[1879]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 17:29:32.024730 waagent[1879]: 2: eth0 inet6 fe80::20d:3aff:fe65:7ef1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:29:32.024730 waagent[1879]: 3: enP33337s1 inet6 fe80::20d:3aff:fe65:7ef1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:29:32.067908 waagent[1879]: 2024-09-04T17:29:32.067838Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 4 17:29:32.067908 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:32.067908 waagent[1879]: pkts bytes target prot opt in out source destination Sep 4 17:29:32.067908 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:32.067908 waagent[1879]: pkts bytes target prot opt in out source destination Sep 4 17:29:32.067908 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:32.067908 waagent[1879]: pkts bytes target prot opt in out source destination Sep 4 17:29:32.067908 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:29:32.067908 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:29:32.067908 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:29:32.071082 waagent[1879]: 2024-09-04T17:29:32.071023Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 17:29:32.071082 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:32.071082 waagent[1879]: pkts bytes target prot opt in out source destination Sep 4 17:29:32.071082 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:32.071082 waagent[1879]: pkts bytes target prot opt in out source destination Sep 4 17:29:32.071082 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:29:32.071082 waagent[1879]: pkts bytes target prot opt in out source destination Sep 4 17:29:32.071082 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:29:32.071082 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:29:32.071082 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:29:32.071476 waagent[1879]: 2024-09-04T17:29:32.071355Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 17:29:38.386112 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 1. Sep 4 17:29:38.391442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:38.492684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:38.503496 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:39.027306 kubelet[2109]: E0904 17:29:39.027221 2109 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:39.031491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:39.031709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:49.136255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:29:49.142393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:49.236369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:29:49.241170 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:49.508915 chronyd[1673]: Selected source PHC0 Sep 4 17:29:49.823709 kubelet[2125]: E0904 17:29:49.823540 2125 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:49.826599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:49.826816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:59.886402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:29:59.893461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:00.002648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:00.007798 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:00.525255 kubelet[2142]: E0904 17:30:00.525168 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:00.527868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:00.528080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:08.614948 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Sep 4 17:30:10.636369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 17:30:10.643435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:10.929257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:10.934037 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:11.302679 kubelet[2162]: E0904 17:30:11.302498 2162 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:11.305274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:11.305489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:11.446383 update_engine[1676]: I0904 17:30:11.446308 1676 update_attempter.cc:509] Updating boot flags... Sep 4 17:30:11.497192 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2182) Sep 4 17:30:11.605180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2185) Sep 4 17:30:20.468138 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:30:20.474456 systemd[1]: Started sshd@0-10.200.8.34:22-10.200.16.10:40524.service - OpenSSH per-connection server daemon (10.200.16.10:40524). 
Sep 4 17:30:21.195310 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 40524 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:21.197132 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:21.202671 systemd-logind[1674]: New session 3 of user core. Sep 4 17:30:21.208326 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:30:21.386276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 17:30:21.391405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:21.620326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:21.622751 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:21.749473 systemd[1]: Started sshd@1-10.200.8.34:22-10.200.16.10:40526.service - OpenSSH per-connection server daemon (10.200.16.10:40526). Sep 4 17:30:21.970669 kubelet[2249]: E0904 17:30:21.970578 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:21.973442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:21.973651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:22.369758 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 40526 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:22.371328 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:22.375594 systemd-logind[1674]: New session 4 of user core. 
Sep 4 17:30:22.386315 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:30:22.815725 sshd[2258]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:22.820313 systemd[1]: sshd@1-10.200.8.34:22-10.200.16.10:40526.service: Deactivated successfully. Sep 4 17:30:22.822325 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:30:22.823029 systemd-logind[1674]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:30:22.823971 systemd-logind[1674]: Removed session 4. Sep 4 17:30:22.927106 systemd[1]: Started sshd@2-10.200.8.34:22-10.200.16.10:40532.service - OpenSSH per-connection server daemon (10.200.16.10:40532). Sep 4 17:30:23.548429 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 40532 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:23.549961 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:23.554677 systemd-logind[1674]: New session 5 of user core. Sep 4 17:30:23.562309 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:30:23.991395 sshd[2266]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:23.994869 systemd[1]: sshd@2-10.200.8.34:22-10.200.16.10:40532.service: Deactivated successfully. Sep 4 17:30:23.996960 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:30:23.998727 systemd-logind[1674]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:30:23.999663 systemd-logind[1674]: Removed session 5. Sep 4 17:30:24.105506 systemd[1]: Started sshd@3-10.200.8.34:22-10.200.16.10:40546.service - OpenSSH per-connection server daemon (10.200.16.10:40546). 
Sep 4 17:30:24.728921 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 40546 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:24.730782 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:24.736482 systemd-logind[1674]: New session 6 of user core. Sep 4 17:30:24.744341 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:30:25.173727 sshd[2273]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:25.178298 systemd[1]: sshd@3-10.200.8.34:22-10.200.16.10:40546.service: Deactivated successfully. Sep 4 17:30:25.180100 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:30:25.180797 systemd-logind[1674]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:30:25.181744 systemd-logind[1674]: Removed session 6. Sep 4 17:30:25.283335 systemd[1]: Started sshd@4-10.200.8.34:22-10.200.16.10:40550.service - OpenSSH per-connection server daemon (10.200.16.10:40550). Sep 4 17:30:25.903621 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 40550 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:25.905226 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:25.909970 systemd-logind[1674]: New session 7 of user core. Sep 4 17:30:25.919351 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:30:26.441225 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:30:26.441671 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:26.481679 sudo[2283]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:26.583990 sshd[2280]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:26.587913 systemd[1]: sshd@4-10.200.8.34:22-10.200.16.10:40550.service: Deactivated successfully. Sep 4 17:30:26.590057 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 4 17:30:26.591648 systemd-logind[1674]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:30:26.592696 systemd-logind[1674]: Removed session 7. Sep 4 17:30:26.694602 systemd[1]: Started sshd@5-10.200.8.34:22-10.200.16.10:40560.service - OpenSSH per-connection server daemon (10.200.16.10:40560). Sep 4 17:30:27.634074 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 40560 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:27.635978 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:27.641665 systemd-logind[1674]: New session 8 of user core. Sep 4 17:30:27.651315 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:30:27.978578 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:30:27.978913 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:27.982296 sudo[2292]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:27.987205 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:30:27.987530 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:28.002499 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:30:28.003979 auditctl[2295]: No rules Sep 4 17:30:28.004372 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:30:28.004570 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:30:28.007421 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:30:28.033197 augenrules[2313]: No rules Sep 4 17:30:28.034564 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 4 17:30:28.036077 sudo[2291]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:28.136390 sshd[2288]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:28.141253 systemd[1]: sshd@5-10.200.8.34:22-10.200.16.10:40560.service: Deactivated successfully. Sep 4 17:30:28.143537 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:30:28.144257 systemd-logind[1674]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:30:28.145129 systemd-logind[1674]: Removed session 8. Sep 4 17:30:28.250479 systemd[1]: Started sshd@6-10.200.8.34:22-10.200.16.10:40564.service - OpenSSH per-connection server daemon (10.200.16.10:40564). Sep 4 17:30:28.874268 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 40564 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:30:28.876053 sshd[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:28.880975 systemd-logind[1674]: New session 9 of user core. Sep 4 17:30:28.888337 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:30:29.217977 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:30:29.218341 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:29.919506 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:30:29.920650 (dockerd)[2333]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:30:31.068656 dockerd[2333]: time="2024-09-04T17:30:31.068582568Z" level=info msg="Starting up" Sep 4 17:30:31.175579 dockerd[2333]: time="2024-09-04T17:30:31.175530968Z" level=info msg="Loading containers: start." 
Sep 4 17:30:31.362408 kernel: Initializing XFRM netlink socket Sep 4 17:30:31.471130 systemd-networkd[1340]: docker0: Link UP Sep 4 17:30:31.494385 dockerd[2333]: time="2024-09-04T17:30:31.494336821Z" level=info msg="Loading containers: done." Sep 4 17:30:32.135939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 4 17:30:32.142426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:32.297286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:32.309580 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:32.365583 kubelet[2433]: E0904 17:30:32.365480 2433 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:32.367097 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3870758222-merged.mount: Deactivated successfully. Sep 4 17:30:32.371203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:32.371383 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:30:34.779388 dockerd[2333]: time="2024-09-04T17:30:34.779334825Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:30:34.779914 dockerd[2333]: time="2024-09-04T17:30:34.779599531Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:30:34.779914 dockerd[2333]: time="2024-09-04T17:30:34.779744834Z" level=info msg="Daemon has completed initialization" Sep 4 17:30:34.833812 dockerd[2333]: time="2024-09-04T17:30:34.832593420Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:30:34.833258 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:30:37.019421 containerd[1692]: time="2024-09-04T17:30:37.019376134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:30:37.614059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259847802.mount: Deactivated successfully. Sep 4 17:30:42.386279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 4 17:30:42.392445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:42.797840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:30:42.802660 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:43.315565 kubelet[2509]: E0904 17:30:43.315501 2509 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:43.318350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:43.318577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:47.491465 containerd[1692]: time="2024-09-04T17:30:47.491400845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:47.494128 containerd[1692]: time="2024-09-04T17:30:47.494059995Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232957" Sep 4 17:30:47.496456 containerd[1692]: time="2024-09-04T17:30:47.496401940Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:47.501148 containerd[1692]: time="2024-09-04T17:30:47.501095029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:47.502943 containerd[1692]: time="2024-09-04T17:30:47.502104748Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 10.482685813s" Sep 4 17:30:47.502943 containerd[1692]: time="2024-09-04T17:30:47.502149149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 17:30:47.524120 containerd[1692]: time="2024-09-04T17:30:47.524072366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:30:50.954612 containerd[1692]: time="2024-09-04T17:30:50.954457702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:50.958498 containerd[1692]: time="2024-09-04T17:30:50.958410177Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206214" Sep 4 17:30:50.962208 containerd[1692]: time="2024-09-04T17:30:50.962124747Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:50.970228 containerd[1692]: time="2024-09-04T17:30:50.970154000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:50.973230 containerd[1692]: time="2024-09-04T17:30:50.973188158Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 
3.44905969s" Sep 4 17:30:50.973329 containerd[1692]: time="2024-09-04T17:30:50.973237059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 17:30:50.997698 containerd[1692]: time="2024-09-04T17:30:50.997656123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:30:52.892591 containerd[1692]: time="2024-09-04T17:30:52.892518658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:52.894355 containerd[1692]: time="2024-09-04T17:30:52.894280691Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321515" Sep 4 17:30:52.901526 containerd[1692]: time="2024-09-04T17:30:52.901456828Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:52.906735 containerd[1692]: time="2024-09-04T17:30:52.906662927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:52.907840 containerd[1692]: time="2024-09-04T17:30:52.907681846Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.909977322s" Sep 4 17:30:52.907840 containerd[1692]: time="2024-09-04T17:30:52.907726647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference 
\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 17:30:52.930726 containerd[1692]: time="2024-09-04T17:30:52.930683983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:30:53.386412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Sep 4 17:30:53.391426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:53.492625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:53.497109 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:53.540844 kubelet[2579]: E0904 17:30:53.540751 2579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:53.543458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:53.543673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:59.929619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506067914.mount: Deactivated successfully. 
Sep 4 17:31:00.408448 containerd[1692]: time="2024-09-04T17:31:00.408380617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:00.410584 containerd[1692]: time="2024-09-04T17:31:00.410509858Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600388" Sep 4 17:31:00.413450 containerd[1692]: time="2024-09-04T17:31:00.413384714Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:00.418330 containerd[1692]: time="2024-09-04T17:31:00.418291108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:00.419412 containerd[1692]: time="2024-09-04T17:31:00.418949921Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 7.488220137s" Sep 4 17:31:00.419412 containerd[1692]: time="2024-09-04T17:31:00.418996822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:31:00.440461 containerd[1692]: time="2024-09-04T17:31:00.440423534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:31:01.730925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221705226.mount: Deactivated successfully. Sep 4 17:31:03.636258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Sep 4 17:31:03.641388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:17.381154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:17.386659 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:31:17.432905 kubelet[2614]: E0904 17:31:17.432793 2614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:31:17.435489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:31:17.435706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:31:24.115418 containerd[1692]: time="2024-09-04T17:31:24.115344947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:24.177829 containerd[1692]: time="2024-09-04T17:31:24.177716315Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Sep 4 17:31:24.224299 containerd[1692]: time="2024-09-04T17:31:24.224037882Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:24.270287 containerd[1692]: time="2024-09-04T17:31:24.270122145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:24.272034 containerd[1692]: time="2024-09-04T17:31:24.271776076Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 23.83130034s" Sep 4 17:31:24.272034 containerd[1692]: time="2024-09-04T17:31:24.271831977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:31:24.294794 containerd[1692]: time="2024-09-04T17:31:24.294750806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:31:26.132389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441555997.mount: Deactivated successfully. Sep 4 17:31:26.571912 containerd[1692]: time="2024-09-04T17:31:26.571848027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:26.617747 containerd[1692]: time="2024-09-04T17:31:26.617651719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Sep 4 17:31:26.622528 containerd[1692]: time="2024-09-04T17:31:26.622445602Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:26.669410 containerd[1692]: time="2024-09-04T17:31:26.669283611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:26.670984 containerd[1692]: time="2024-09-04T17:31:26.670458832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", 
repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.375652724s" Sep 4 17:31:26.670984 containerd[1692]: time="2024-09-04T17:31:26.670514533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:31:26.694662 containerd[1692]: time="2024-09-04T17:31:26.694610249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:31:27.636037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Sep 4 17:31:27.641405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:27.743925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:27.754495 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:31:27.799748 kubelet[2680]: E0904 17:31:27.799683 2680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:31:27.802618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:31:27.802850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:31:29.951766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571520536.mount: Deactivated successfully. 
Sep 4 17:31:33.063385 containerd[1692]: time="2024-09-04T17:31:33.063302117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:33.066965 containerd[1692]: time="2024-09-04T17:31:33.066875379Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Sep 4 17:31:33.135591 containerd[1692]: time="2024-09-04T17:31:33.135484865Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:33.164934 containerd[1692]: time="2024-09-04T17:31:33.164819572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:31:33.167102 containerd[1692]: time="2024-09-04T17:31:33.166593802Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.471931052s" Sep 4 17:31:33.167102 containerd[1692]: time="2024-09-04T17:31:33.166653303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:31:35.986693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:35.993482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:36.027799 systemd[1]: Reloading requested from client PID 2799 ('systemctl') (unit session-9.scope)... Sep 4 17:31:36.027822 systemd[1]: Reloading... 
Sep 4 17:31:36.132302 zram_generator::config[2842]: No configuration found. Sep 4 17:31:36.248710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:31:36.329756 systemd[1]: Reloading finished in 301 ms. Sep 4 17:31:37.370397 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:31:37.370537 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:31:37.371113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:37.377543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:41.947378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:41.957478 (kubelet)[2903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:31:42.002177 kubelet[2903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:31:42.002177 kubelet[2903]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:31:42.002177 kubelet[2903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:31:42.002671 kubelet[2903]: I0904 17:31:42.002215 2903 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:31:42.452628 kubelet[2903]: I0904 17:31:42.452580 2903 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:31:42.452628 kubelet[2903]: I0904 17:31:42.452613 2903 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:31:42.452936 kubelet[2903]: I0904 17:31:42.452908 2903 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:31:42.469017 kubelet[2903]: E0904 17:31:42.468966 2903 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.470000 kubelet[2903]: I0904 17:31:42.469857 2903 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:31:42.478949 kubelet[2903]: I0904 17:31:42.478919 2903 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:31:42.479244 kubelet[2903]: I0904 17:31:42.479220 2903 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:31:42.479439 kubelet[2903]: I0904 17:31:42.479411 2903 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:31:42.480097 kubelet[2903]: I0904 17:31:42.480070 2903 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:31:42.480097 kubelet[2903]: I0904 17:31:42.480099 2903 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:31:42.480270 kubelet[2903]: I0904 
17:31:42.480249 2903 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:31:42.480395 kubelet[2903]: I0904 17:31:42.480381 2903 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:31:42.480453 kubelet[2903]: I0904 17:31:42.480403 2903 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:31:42.481012 kubelet[2903]: W0904 17:31:42.480928 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.481012 kubelet[2903]: E0904 17:31:42.480990 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.481221 kubelet[2903]: I0904 17:31:42.481204 2903 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:31:42.481287 kubelet[2903]: I0904 17:31:42.481253 2903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:31:42.482618 kubelet[2903]: W0904 17:31:42.482452 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.482618 kubelet[2903]: E0904 17:31:42.482503 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.483191 kubelet[2903]: I0904 17:31:42.482873 2903 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:31:42.488011 kubelet[2903]: I0904 17:31:42.486715 2903 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:31:42.488011 kubelet[2903]: W0904 17:31:42.486796 2903 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:31:42.488011 kubelet[2903]: I0904 17:31:42.487729 2903 server.go:1256] "Started kubelet" Sep 4 17:31:42.490298 kubelet[2903]: I0904 17:31:42.489906 2903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:31:42.499053 kubelet[2903]: E0904 17:31:42.499020 2903 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.1-a-27f7f2cbdf.17f21ad365fb14f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-a-27f7f2cbdf,UID:ci-3975.2.1-a-27f7f2cbdf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-a-27f7f2cbdf,},FirstTimestamp:2024-09-04 17:31:42.487696632 +0000 UTC m=+0.525989041,LastTimestamp:2024-09-04 17:31:42.487696632 +0000 UTC m=+0.525989041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-a-27f7f2cbdf,}" Sep 4 17:31:42.500381 kubelet[2903]: I0904 17:31:42.499279 2903 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:31:42.500381 kubelet[2903]: I0904 17:31:42.499518 2903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:31:42.500381 kubelet[2903]: I0904 17:31:42.499826 2903 server.go:233] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:31:42.500538 kubelet[2903]: I0904 17:31:42.500406 2903 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:31:42.501123 kubelet[2903]: I0904 17:31:42.501105 2903 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:31:42.505437 kubelet[2903]: W0904 17:31:42.505398 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.505585 kubelet[2903]: E0904 17:31:42.505572 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.505914 kubelet[2903]: E0904 17:31:42.505900 2903 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:31:42.506286 kubelet[2903]: I0904 17:31:42.501427 2903 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:31:42.507091 kubelet[2903]: I0904 17:31:42.502660 2903 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:31:42.507449 kubelet[2903]: E0904 17:31:42.503826 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-27f7f2cbdf?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="200ms" Sep 4 17:31:42.507449 kubelet[2903]: I0904 17:31:42.504014 2903 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:31:42.507558 kubelet[2903]: I0904 17:31:42.507526 2903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:31:42.508797 kubelet[2903]: I0904 17:31:42.508774 2903 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:31:42.515089 kubelet[2903]: I0904 17:31:42.514967 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:31:42.516387 kubelet[2903]: I0904 17:31:42.516367 2903 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:31:42.516811 kubelet[2903]: I0904 17:31:42.516489 2903 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:31:42.516811 kubelet[2903]: I0904 17:31:42.516515 2903 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:31:42.516811 kubelet[2903]: E0904 17:31:42.516580 2903 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:31:42.524040 kubelet[2903]: W0904 17:31:42.523996 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.524040 kubelet[2903]: E0904 17:31:42.524041 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:42.557764 kubelet[2903]: I0904 17:31:42.557733 2903 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:31:42.557764 kubelet[2903]: I0904 17:31:42.557767 2903 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:31:42.557949 kubelet[2903]: I0904 17:31:42.557786 2903 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:31:42.603225 kubelet[2903]: I0904 17:31:42.603184 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:42.603667 kubelet[2903]: E0904 17:31:42.603642 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:42.617232 kubelet[2903]: E0904 
17:31:42.617187 2903 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:31:42.708556 kubelet[2903]: E0904 17:31:42.708420 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-27f7f2cbdf?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="400ms" Sep 4 17:31:42.805911 kubelet[2903]: I0904 17:31:42.805877 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:42.806388 kubelet[2903]: E0904 17:31:42.806359 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:42.817548 kubelet[2903]: E0904 17:31:42.817515 2903 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:31:43.109750 kubelet[2903]: E0904 17:31:43.109599 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-27f7f2cbdf?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="800ms" Sep 4 17:31:43.209425 kubelet[2903]: I0904 17:31:43.209384 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:43.209830 kubelet[2903]: E0904 17:31:43.209804 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:43.217902 kubelet[2903]: E0904 17:31:43.217879 2903 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have 
completed yet" Sep 4 17:31:43.702340 kubelet[2903]: W0904 17:31:43.702287 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:43.702340 kubelet[2903]: E0904 17:31:43.702346 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:43.815848 kubelet[2903]: W0904 17:31:43.815771 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:43.815848 kubelet[2903]: E0904 17:31:43.815851 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:43.911064 kubelet[2903]: E0904 17:31:43.911017 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-27f7f2cbdf?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="1.6s" Sep 4 17:31:44.012109 kubelet[2903]: I0904 17:31:44.011988 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:44.012475 kubelet[2903]: E0904 17:31:44.012440 2903 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:44.018784 kubelet[2903]: E0904 17:31:44.018755 2903 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:31:44.056581 kubelet[2903]: W0904 17:31:44.056533 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:44.056581 kubelet[2903]: E0904 17:31:44.056585 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:44.067067 kubelet[2903]: W0904 17:31:44.067014 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:44.067198 kubelet[2903]: E0904 17:31:44.067087 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:44.522317 kubelet[2903]: E0904 17:31:44.522270 2903 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 
17:31:45.671669 kubelet[2903]: E0904 17:31:45.511619 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-27f7f2cbdf?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="3.2s" Sep 4 17:31:45.671669 kubelet[2903]: W0904 17:31:45.545458 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:45.671669 kubelet[2903]: E0904 17:31:45.545500 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:45.671669 kubelet[2903]: I0904 17:31:45.614897 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:45.671669 kubelet[2903]: E0904 17:31:45.615307 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:45.671669 kubelet[2903]: E0904 17:31:45.619384 2903 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:31:45.717884 kubelet[2903]: I0904 17:31:45.717810 2903 policy_none.go:49] "None policy: Start" Sep 4 17:31:45.718872 kubelet[2903]: I0904 17:31:45.718831 2903 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:31:45.718872 kubelet[2903]: I0904 17:31:45.718874 2903 state_mem.go:35] "Initializing new in-memory state store" Sep 4 
17:31:45.727796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:31:45.739353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:31:45.742595 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:31:45.749882 kubelet[2903]: I0904 17:31:45.749855 2903 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:31:45.750415 kubelet[2903]: I0904 17:31:45.750205 2903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:31:45.755270 kubelet[2903]: E0904 17:31:45.754308 2903 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:45.769751 kubelet[2903]: W0904 17:31:45.769711 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:45.769751 kubelet[2903]: E0904 17:31:45.769752 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:45.898247 kubelet[2903]: W0904 17:31:45.898185 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:45.898247 kubelet[2903]: E0904 17:31:45.898243 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:47.189881 kubelet[2903]: W0904 17:31:47.189830 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:47.189881 kubelet[2903]: E0904 17:31:47.189887 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:48.712882 kubelet[2903]: E0904 17:31:48.712832 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-27f7f2cbdf?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="6.4s" Sep 4 17:31:48.817810 kubelet[2903]: I0904 17:31:48.817755 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.818281 kubelet[2903]: E0904 17:31:48.818251 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.820326 kubelet[2903]: I0904 17:31:48.820304 2903 topology_manager.go:215] "Topology Admit Handler" podUID="b86c734c4b034b92d8c8f12244d44468" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.821755 kubelet[2903]: I0904 17:31:48.821729 2903 topology_manager.go:215] "Topology Admit Handler" podUID="9b137c51511823484654db1615c40c7c" 
podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.822948 kubelet[2903]: E0904 17:31:48.822905 2903 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:48.823430 kubelet[2903]: I0904 17:31:48.823268 2903 topology_manager.go:215] "Topology Admit Handler" podUID="3c914eea95e6695aa4017aa993244d0e" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.829613 systemd[1]: Created slice kubepods-burstable-podb86c734c4b034b92d8c8f12244d44468.slice - libcontainer container kubepods-burstable-podb86c734c4b034b92d8c8f12244d44468.slice. Sep 4 17:31:48.849587 systemd[1]: Created slice kubepods-burstable-pod9b137c51511823484654db1615c40c7c.slice - libcontainer container kubepods-burstable-pod9b137c51511823484654db1615c40c7c.slice. Sep 4 17:31:48.863905 systemd[1]: Created slice kubepods-burstable-pod3c914eea95e6695aa4017aa993244d0e.slice - libcontainer container kubepods-burstable-pod3c914eea95e6695aa4017aa993244d0e.slice. 
Sep 4 17:31:48.947509 kubelet[2903]: I0904 17:31:48.947453 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c914eea95e6695aa4017aa993244d0e-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"3c914eea95e6695aa4017aa993244d0e\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947720 kubelet[2903]: I0904 17:31:48.947582 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c914eea95e6695aa4017aa993244d0e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"3c914eea95e6695aa4017aa993244d0e\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947720 kubelet[2903]: I0904 17:31:48.947669 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947720 kubelet[2903]: I0904 17:31:48.947726 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c914eea95e6695aa4017aa993244d0e-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"3c914eea95e6695aa4017aa993244d0e\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947952 kubelet[2903]: I0904 17:31:48.947791 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-kubeconfig\") pod 
\"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947952 kubelet[2903]: I0904 17:31:48.947836 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947952 kubelet[2903]: I0904 17:31:48.947891 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b137c51511823484654db1615c40c7c-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"9b137c51511823484654db1615c40c7c\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.947952 kubelet[2903]: I0904 17:31:48.947925 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:48.948184 kubelet[2903]: I0904 17:31:48.947970 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:49.013006 kubelet[2903]: W0904 17:31:49.012850 2903 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:49.013006 kubelet[2903]: E0904 17:31:49.012909 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:49.148731 containerd[1692]: time="2024-09-04T17:31:49.148674117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf,Uid:b86c734c4b034b92d8c8f12244d44468,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:49.162587 containerd[1692]: time="2024-09-04T17:31:49.162541463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-27f7f2cbdf,Uid:9b137c51511823484654db1615c40c7c,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:49.166590 containerd[1692]: time="2024-09-04T17:31:49.166540234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-27f7f2cbdf,Uid:3c914eea95e6695aa4017aa993244d0e,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:49.486488 kubelet[2903]: W0904 17:31:49.486437 2903 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Sep 4 17:31:49.486653 kubelet[2903]: E0904 17:31:49.486513 2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-27f7f2cbdf&limit=500&resourceVersion=0": dial tcp 
10.200.8.34:6443: connect: connection refused Sep 4 17:31:49.869492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212531255.mount: Deactivated successfully. Sep 4 17:31:49.906437 containerd[1692]: time="2024-09-04T17:31:49.906380712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:49.910125 containerd[1692]: time="2024-09-04T17:31:49.910071473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:31:49.913572 containerd[1692]: time="2024-09-04T17:31:49.913535131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:49.917642 containerd[1692]: time="2024-09-04T17:31:49.917608998Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:49.920791 containerd[1692]: time="2024-09-04T17:31:49.920741149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:31:49.924931 containerd[1692]: time="2024-09-04T17:31:49.924894018Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:49.926743 containerd[1692]: time="2024-09-04T17:31:49.926516045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:31:49.931510 containerd[1692]: time="2024-09-04T17:31:49.931458526Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:31:49.932782 containerd[1692]: time="2024-09-04T17:31:49.932226239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 769.570974ms" Sep 4 17:31:49.933291 containerd[1692]: time="2024-09-04T17:31:49.933258456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 784.434337ms" Sep 4 17:31:49.942130 containerd[1692]: time="2024-09-04T17:31:49.942092902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 775.449066ms" Sep 4 17:31:51.314377 containerd[1692]: time="2024-09-04T17:31:51.314286248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:51.315046 containerd[1692]: time="2024-09-04T17:31:51.314350749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:51.315046 containerd[1692]: time="2024-09-04T17:31:51.314373849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:51.315046 containerd[1692]: time="2024-09-04T17:31:51.314389050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:51.318811 containerd[1692]: time="2024-09-04T17:31:51.316380882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:51.318811 containerd[1692]: time="2024-09-04T17:31:51.316535685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:51.318811 containerd[1692]: time="2024-09-04T17:31:51.316561385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:51.318811 containerd[1692]: time="2024-09-04T17:31:51.316589186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:51.320914 containerd[1692]: time="2024-09-04T17:31:51.320614852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:51.320914 containerd[1692]: time="2024-09-04T17:31:51.320675253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:51.320914 containerd[1692]: time="2024-09-04T17:31:51.320712254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:51.320914 containerd[1692]: time="2024-09-04T17:31:51.320732554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:51.363079 systemd[1]: run-containerd-runc-k8s.io-07c54f152cdb003a6b2a41b3a67c60ae4ab0f7ebf73220a4cffe1453d0ce5027-runc.4lLmHc.mount: Deactivated successfully. Sep 4 17:31:51.379343 systemd[1]: Started cri-containerd-07c54f152cdb003a6b2a41b3a67c60ae4ab0f7ebf73220a4cffe1453d0ce5027.scope - libcontainer container 07c54f152cdb003a6b2a41b3a67c60ae4ab0f7ebf73220a4cffe1453d0ce5027. Sep 4 17:31:51.381641 systemd[1]: Started cri-containerd-376be3809381f0d019b5f033446217c2edc0c29e4c19573f10758fb828f5cd15.scope - libcontainer container 376be3809381f0d019b5f033446217c2edc0c29e4c19573f10758fb828f5cd15. Sep 4 17:31:51.386967 systemd[1]: Started cri-containerd-2f93043a4c7bdacc75902abb4ddd5ab2b73d3feff14a8dfdf0dfb7300937c0ef.scope - libcontainer container 2f93043a4c7bdacc75902abb4ddd5ab2b73d3feff14a8dfdf0dfb7300937c0ef. 
Sep 4 17:31:51.464618 containerd[1692]: time="2024-09-04T17:31:51.464463826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-27f7f2cbdf,Uid:9b137c51511823484654db1615c40c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"376be3809381f0d019b5f033446217c2edc0c29e4c19573f10758fb828f5cd15\"" Sep 4 17:31:51.481364 containerd[1692]: time="2024-09-04T17:31:51.481253903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-27f7f2cbdf,Uid:3c914eea95e6695aa4017aa993244d0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f93043a4c7bdacc75902abb4ddd5ab2b73d3feff14a8dfdf0dfb7300937c0ef\"" Sep 4 17:31:51.483448 containerd[1692]: time="2024-09-04T17:31:51.482763428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf,Uid:b86c734c4b034b92d8c8f12244d44468,Namespace:kube-system,Attempt:0,} returns sandbox id \"07c54f152cdb003a6b2a41b3a67c60ae4ab0f7ebf73220a4cffe1453d0ce5027\"" Sep 4 17:31:51.486438 containerd[1692]: time="2024-09-04T17:31:51.486400888Z" level=info msg="CreateContainer within sandbox \"376be3809381f0d019b5f033446217c2edc0c29e4c19573f10758fb828f5cd15\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:31:51.489207 containerd[1692]: time="2024-09-04T17:31:51.489118933Z" level=info msg="CreateContainer within sandbox \"07c54f152cdb003a6b2a41b3a67c60ae4ab0f7ebf73220a4cffe1453d0ce5027\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:31:51.489482 containerd[1692]: time="2024-09-04T17:31:51.489455139Z" level=info msg="CreateContainer within sandbox \"2f93043a4c7bdacc75902abb4ddd5ab2b73d3feff14a8dfdf0dfb7300937c0ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:31:51.537973 kubelet[2903]: E0904 17:31:51.537937 2903 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.200.8.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.1-a-27f7f2cbdf.17f21ad365fb14f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-a-27f7f2cbdf,UID:ci-3975.2.1-a-27f7f2cbdf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-a-27f7f2cbdf,},FirstTimestamp:2024-09-04 17:31:42.487696632 +0000 UTC m=+0.525989041,LastTimestamp:2024-09-04 17:31:42.487696632 +0000 UTC m=+0.525989041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-a-27f7f2cbdf,}" Sep 4 17:31:51.563267 containerd[1692]: time="2024-09-04T17:31:51.563223756Z" level=info msg="CreateContainer within sandbox \"376be3809381f0d019b5f033446217c2edc0c29e4c19573f10758fb828f5cd15\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d511d29a4a16f2b034bb131009db57860196b879b64c9da14f1767c63e442252\"" Sep 4 17:31:51.574559 containerd[1692]: time="2024-09-04T17:31:51.574451341Z" level=info msg="CreateContainer within sandbox \"07c54f152cdb003a6b2a41b3a67c60ae4ab0f7ebf73220a4cffe1453d0ce5027\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34c1dba978544551c74a08bdfe416d1a3e1522cff8cdcd5c4c5fea88da0e45c1\"" Sep 4 17:31:51.574743 containerd[1692]: time="2024-09-04T17:31:51.574710446Z" level=info msg="StartContainer for \"d511d29a4a16f2b034bb131009db57860196b879b64c9da14f1767c63e442252\"" Sep 4 17:31:51.577601 containerd[1692]: time="2024-09-04T17:31:51.577555293Z" level=info msg="StartContainer for \"34c1dba978544551c74a08bdfe416d1a3e1522cff8cdcd5c4c5fea88da0e45c1\"" Sep 4 17:31:51.579627 containerd[1692]: time="2024-09-04T17:31:51.579595826Z" level=info msg="CreateContainer within sandbox 
\"2f93043a4c7bdacc75902abb4ddd5ab2b73d3feff14a8dfdf0dfb7300937c0ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1f71dd617de22ecf4f1520d343d9355a2ac3bf03617ba5bd656a41cd79288b4\"" Sep 4 17:31:51.581736 containerd[1692]: time="2024-09-04T17:31:51.580344739Z" level=info msg="StartContainer for \"e1f71dd617de22ecf4f1520d343d9355a2ac3bf03617ba5bd656a41cd79288b4\"" Sep 4 17:31:51.619530 systemd[1]: Started cri-containerd-d511d29a4a16f2b034bb131009db57860196b879b64c9da14f1767c63e442252.scope - libcontainer container d511d29a4a16f2b034bb131009db57860196b879b64c9da14f1767c63e442252. Sep 4 17:31:51.636351 systemd[1]: Started cri-containerd-34c1dba978544551c74a08bdfe416d1a3e1522cff8cdcd5c4c5fea88da0e45c1.scope - libcontainer container 34c1dba978544551c74a08bdfe416d1a3e1522cff8cdcd5c4c5fea88da0e45c1. Sep 4 17:31:51.645340 systemd[1]: Started cri-containerd-e1f71dd617de22ecf4f1520d343d9355a2ac3bf03617ba5bd656a41cd79288b4.scope - libcontainer container e1f71dd617de22ecf4f1520d343d9355a2ac3bf03617ba5bd656a41cd79288b4. 
Sep 4 17:31:51.728506 containerd[1692]: time="2024-09-04T17:31:51.728457083Z" level=info msg="StartContainer for \"d511d29a4a16f2b034bb131009db57860196b879b64c9da14f1767c63e442252\" returns successfully" Sep 4 17:31:51.785702 containerd[1692]: time="2024-09-04T17:31:51.785634527Z" level=info msg="StartContainer for \"34c1dba978544551c74a08bdfe416d1a3e1522cff8cdcd5c4c5fea88da0e45c1\" returns successfully" Sep 4 17:31:51.785874 containerd[1692]: time="2024-09-04T17:31:51.785782429Z" level=info msg="StartContainer for \"e1f71dd617de22ecf4f1520d343d9355a2ac3bf03617ba5bd656a41cd79288b4\" returns successfully" Sep 4 17:31:54.114757 kubelet[2903]: E0904 17:31:54.114621 2903 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-27f7f2cbdf" not found Sep 4 17:31:54.568958 kubelet[2903]: E0904 17:31:54.568909 2903 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-27f7f2cbdf" not found Sep 4 17:31:55.079354 kubelet[2903]: E0904 17:31:55.079307 2903 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-27f7f2cbdf" not found Sep 4 17:31:55.117617 kubelet[2903]: E0904 17:31:55.117551 2903 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.1-a-27f7f2cbdf\" not found" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:55.220514 kubelet[2903]: I0904 17:31:55.220455 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:55.224677 kubelet[2903]: I0904 17:31:55.224647 2903 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:55.231680 kubelet[2903]: E0904 17:31:55.231655 2903 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.332898 kubelet[2903]: E0904 17:31:55.332244 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.433284 kubelet[2903]: E0904 17:31:55.433229 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.533540 kubelet[2903]: E0904 17:31:55.533489 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.634614 kubelet[2903]: E0904 17:31:55.634464 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.734905 kubelet[2903]: E0904 17:31:55.734851 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.755414 kubelet[2903]: E0904 17:31:55.755384 2903 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.835187 kubelet[2903]: E0904 17:31:55.835129 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:55.936000 kubelet[2903]: E0904 17:31:55.935945 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.036829 kubelet[2903]: E0904 17:31:56.036776 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.137432 kubelet[2903]: E0904 17:31:56.137388 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.238416 kubelet[2903]: E0904 
17:31:56.238226 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.338848 kubelet[2903]: E0904 17:31:56.338788 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.439334 kubelet[2903]: E0904 17:31:56.439280 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.540477 kubelet[2903]: E0904 17:31:56.540343 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.641336 kubelet[2903]: E0904 17:31:56.641240 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.680033 systemd[1]: Reloading requested from client PID 3178 ('systemctl') (unit session-9.scope)... Sep 4 17:31:56.680049 systemd[1]: Reloading... Sep 4 17:31:56.742567 kubelet[2903]: E0904 17:31:56.742525 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.753258 zram_generator::config[3212]: No configuration found. Sep 4 17:31:56.843122 kubelet[2903]: E0904 17:31:56.842934 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.889683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:31:56.943459 kubelet[2903]: E0904 17:31:56.943407 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-27f7f2cbdf\" not found" Sep 4 17:31:56.984050 systemd[1]: Reloading finished in 303 ms. 
Sep 4 17:31:57.032073 kubelet[2903]: I0904 17:31:57.031990 2903 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:31:57.032292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:57.040544 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:31:57.040765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:57.049482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:31:57.183865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:31:57.194526 (kubelet)[3282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:31:57.728763 kubelet[3282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:31:57.729817 kubelet[3282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:31:57.729817 kubelet[3282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:31:57.729817 kubelet[3282]: I0904 17:31:57.729402 3282 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:31:57.734784 kubelet[3282]: I0904 17:31:57.734756 3282 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:31:57.734784 kubelet[3282]: I0904 17:31:57.734778 3282 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:31:57.735057 kubelet[3282]: I0904 17:31:57.735030 3282 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:31:57.736494 kubelet[3282]: I0904 17:31:57.736470 3282 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:31:57.738435 kubelet[3282]: I0904 17:31:57.738285 3282 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:31:57.744860 kubelet[3282]: I0904 17:31:57.744824 3282 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:31:57.745152 kubelet[3282]: I0904 17:31:57.745133 3282 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:31:57.745365 kubelet[3282]: I0904 17:31:57.745335 3282 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:31:57.745498 kubelet[3282]: I0904 17:31:57.745376 3282 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:31:57.745498 kubelet[3282]: I0904 17:31:57.745399 3282 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:31:57.745498 kubelet[3282]: I0904 
17:31:57.745452 3282 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:31:57.745616 kubelet[3282]: I0904 17:31:57.745561 3282 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:31:57.745616 kubelet[3282]: I0904 17:31:57.745582 3282 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:31:57.745616 kubelet[3282]: I0904 17:31:57.745617 3282 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:31:57.747631 kubelet[3282]: I0904 17:31:57.745636 3282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:31:57.752183 kubelet[3282]: I0904 17:31:57.750546 3282 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:31:57.752183 kubelet[3282]: I0904 17:31:57.750802 3282 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:31:57.752183 kubelet[3282]: I0904 17:31:57.751287 3282 server.go:1256] "Started kubelet" Sep 4 17:31:57.754724 kubelet[3282]: I0904 17:31:57.754699 3282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:31:57.757207 kubelet[3282]: I0904 17:31:57.757180 3282 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:31:57.759084 kubelet[3282]: I0904 17:31:57.759059 3282 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:31:57.761668 kubelet[3282]: I0904 17:31:57.761639 3282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:31:57.761882 kubelet[3282]: I0904 17:31:57.761859 3282 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:31:57.767244 kubelet[3282]: I0904 17:31:57.767224 3282 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:31:57.769022 kubelet[3282]: I0904 17:31:57.769003 3282 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Sep 4 17:31:57.769308 kubelet[3282]: I0904 17:31:57.769291 3282 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:31:57.773658 kubelet[3282]: I0904 17:31:57.773638 3282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:31:57.775306 kubelet[3282]: I0904 17:31:57.775273 3282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:31:57.775306 kubelet[3282]: I0904 17:31:57.775307 3282 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:31:57.775423 kubelet[3282]: I0904 17:31:57.775328 3282 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:31:57.775423 kubelet[3282]: E0904 17:31:57.775381 3282 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:31:57.775884 kubelet[3282]: I0904 17:31:57.775856 3282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:31:57.786329 kubelet[3282]: I0904 17:31:57.786306 3282 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:31:57.786329 kubelet[3282]: I0904 17:31:57.786325 3282 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:31:57.839621 kubelet[3282]: I0904 17:31:57.839588 3282 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:31:57.839621 kubelet[3282]: I0904 17:31:57.839612 3282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:31:57.839621 kubelet[3282]: I0904 17:31:57.839633 3282 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:31:57.839872 kubelet[3282]: I0904 17:31:57.839863 3282 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:31:57.839919 kubelet[3282]: I0904 17:31:57.839892 3282 state_mem.go:96] "Updated CPUSet 
assignments" assignments={} Sep 4 17:31:57.839919 kubelet[3282]: I0904 17:31:57.839902 3282 policy_none.go:49] "None policy: Start" Sep 4 17:31:57.840824 kubelet[3282]: I0904 17:31:57.840563 3282 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:31:57.840824 kubelet[3282]: I0904 17:31:57.840592 3282 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:31:57.840824 kubelet[3282]: I0904 17:31:57.840740 3282 state_mem.go:75] "Updated machine memory state" Sep 4 17:31:57.845621 kubelet[3282]: I0904 17:31:57.845135 3282 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:31:57.845621 kubelet[3282]: I0904 17:31:57.845428 3282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:31:57.871837 kubelet[3282]: I0904 17:31:57.871817 3282 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.876422 kubelet[3282]: I0904 17:31:57.876397 3282 topology_manager.go:215] "Topology Admit Handler" podUID="3c914eea95e6695aa4017aa993244d0e" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.876541 kubelet[3282]: I0904 17:31:57.876494 3282 topology_manager.go:215] "Topology Admit Handler" podUID="b86c734c4b034b92d8c8f12244d44468" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.876593 kubelet[3282]: I0904 17:31:57.876545 3282 topology_manager.go:215] "Topology Admit Handler" podUID="9b137c51511823484654db1615c40c7c" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.888361 kubelet[3282]: W0904 17:31:57.888249 3282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:31:57.889260 kubelet[3282]: W0904 17:31:57.889109 3282 warnings.go:70] metadata.name: this is used in 
the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:31:57.889260 kubelet[3282]: I0904 17:31:57.889150 3282 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.889567 kubelet[3282]: I0904 17:31:57.889413 3282 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.889631 kubelet[3282]: W0904 17:31:57.889586 3282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:31:57.970692 kubelet[3282]: I0904 17:31:57.970626 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c914eea95e6695aa4017aa993244d0e-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"3c914eea95e6695aa4017aa993244d0e\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.970692 kubelet[3282]: I0904 17:31:57.970687 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971141 kubelet[3282]: I0904 17:31:57.970741 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971141 kubelet[3282]: I0904 
17:31:57.970783 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b137c51511823484654db1615c40c7c-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"9b137c51511823484654db1615c40c7c\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971141 kubelet[3282]: I0904 17:31:57.970836 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971141 kubelet[3282]: I0904 17:31:57.970866 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c914eea95e6695aa4017aa993244d0e-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"3c914eea95e6695aa4017aa993244d0e\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971141 kubelet[3282]: I0904 17:31:57.970904 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c914eea95e6695aa4017aa993244d0e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"3c914eea95e6695aa4017aa993244d0e\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971346 kubelet[3282]: I0904 17:31:57.970964 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: 
\"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:57.971346 kubelet[3282]: I0904 17:31:57.971006 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b86c734c4b034b92d8c8f12244d44468-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf\" (UID: \"b86c734c4b034b92d8c8f12244d44468\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:58.746775 kubelet[3282]: I0904 17:31:58.746723 3282 apiserver.go:52] "Watching apiserver" Sep 4 17:31:58.770043 kubelet[3282]: I0904 17:31:58.769988 3282 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:31:58.832116 kubelet[3282]: W0904 17:31:58.831322 3282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:31:58.832116 kubelet[3282]: E0904 17:31:58.831404 3282 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.1-a-27f7f2cbdf\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:31:58.847985 kubelet[3282]: I0904 17:31:58.847723 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-a-27f7f2cbdf" podStartSLOduration=1.847664405 podStartE2EDuration="1.847664405s" podCreationTimestamp="2024-09-04 17:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:58.840497575 +0000 UTC m=+1.640496093" watchObservedRunningTime="2024-09-04 17:31:58.847664405 +0000 UTC m=+1.647662923" Sep 4 17:31:58.847985 kubelet[3282]: I0904 17:31:58.847863 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-3975.2.1-a-27f7f2cbdf" podStartSLOduration=1.8478320080000001 podStartE2EDuration="1.847832008s" podCreationTimestamp="2024-09-04 17:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:58.847493802 +0000 UTC m=+1.647492420" watchObservedRunningTime="2024-09-04 17:31:58.847832008 +0000 UTC m=+1.647830626" Sep 4 17:31:58.871771 kubelet[3282]: I0904 17:31:58.871510 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-a-27f7f2cbdf" podStartSLOduration=1.871367535 podStartE2EDuration="1.871367535s" podCreationTimestamp="2024-09-04 17:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:58.86337939 +0000 UTC m=+1.663377908" watchObservedRunningTime="2024-09-04 17:31:58.871367535 +0000 UTC m=+1.671366153" Sep 4 17:32:09.441005 update_engine[1676]: I0904 17:32:09.440948 1676 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 17:32:09.441005 update_engine[1676]: I0904 17:32:09.440999 1676 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 17:32:09.441716 update_engine[1676]: I0904 17:32:09.441273 1676 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 17:32:09.441967 update_engine[1676]: I0904 17:32:09.441932 1676 omaha_request_params.cc:62] Current group set to stable Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442096 1676 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442116 1676 update_attempter.cc:643] Scheduling an action processor start. 
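The container-manager NodeConfig earlier in this log (the `container_manager_linux.go:270` line) is emitted as one long JSON object, which makes its hard-eviction defaults easy to miss. A small Python sketch, with the `HardEvictionThresholds` fragment copied verbatim from that log line, shows how to pull the signals out; the helper name `hard_eviction_summary` is illustrative, not part of the kubelet:

```python
import json

# HardEvictionThresholds fragment copied verbatim from the kubelet
# container_manager_linux.go:270 "Creating Container Manager object
# based on Node Config" line earlier in this log.
NODE_CONFIG_JSON = '''{"HardEvictionThresholds":[
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}'''

def hard_eviction_summary(config_json: str) -> dict:
    """Map each eviction signal to its threshold (quantity or percentage)."""
    thresholds = json.loads(config_json)["HardEvictionThresholds"]
    return {
        t["Signal"]: t["Value"]["Quantity"] or f"{t['Value']['Percentage']:.0%}"
        for t in thresholds
    }

print(hard_eviction_summary(NODE_CONFIG_JSON))
# → {'nodefs.inodesFree': '5%', 'imagefs.available': '15%',
#    'memory.available': '100Mi', 'nodefs.available': '10%'}
```

These are the kubelet's stock hard-eviction defaults; each signal uses either an absolute quantity (`100Mi`) or a percentage, never both.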
Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442135 1676 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442198 1676 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442286 1676 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442296 1676 omaha_request_action.cc:272] Request: Sep 4 17:32:09.442321 update_engine[1676]: I0904 17:32:09.442301 1676 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:32:09.443627 locksmithd[1728]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 17:32:09.444129 update_engine[1676]: I0904 17:32:09.444103 1676 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:32:09.444626 update_engine[1676]: I0904 17:32:09.444588 1676 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 17:32:09.470592 update_engine[1676]: E0904 17:32:09.470556 1676 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:32:09.470707 update_engine[1676]: I0904 17:32:09.470642 1676 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 17:32:10.225859 kubelet[3282]: I0904 17:32:10.225821 3282 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:32:10.226390 containerd[1692]: time="2024-09-04T17:32:10.226281592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:32:10.226740 kubelet[3282]: I0904 17:32:10.226533 3282 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:32:10.595667 sudo[2324]: pam_unix(sudo:session): session closed for user root Sep 4 17:32:10.697727 sshd[2321]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:10.702875 systemd[1]: sshd@6-10.200.8.34:22-10.200.16.10:40564.service: Deactivated successfully. Sep 4 17:32:10.705144 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:32:10.705371 systemd[1]: session-9.scope: Consumed 4.386s CPU time, 138.5M memory peak, 0B memory swap peak. Sep 4 17:32:10.705985 systemd-logind[1674]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:32:10.707441 systemd-logind[1674]: Removed session 9. Sep 4 17:32:10.933326 kubelet[3282]: I0904 17:32:10.933270 3282 topology_manager.go:215] "Topology Admit Handler" podUID="b646b5f9-58b3-4ed0-9a08-e23660b117c3" podNamespace="kube-system" podName="kube-proxy-gd2kx" Sep 4 17:32:10.946453 systemd[1]: Created slice kubepods-besteffort-podb646b5f9_58b3_4ed0_9a08_e23660b117c3.slice - libcontainer container kubepods-besteffort-podb646b5f9_58b3_4ed0_9a08_e23660b117c3.slice. 
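The runtime-config update above switches the node's PodCIDR from empty to `192.168.0.0/24`. As a quick sanity check on what that allocation gives this node (the CIDR value is taken from the `kubelet_network.go:61` line; the sample pod IP is hypothetical), Python's standard `ipaddress` module can be used:

```python
import ipaddress

# PodCIDR value taken from the "Updating Pod CIDR" log line above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

print(pod_cidr.num_addresses)      # → 256 (a /24 pod range per node)
print(pod_cidr.network_address)    # → 192.168.0.0
print(pod_cidr.broadcast_address)  # → 192.168.0.255

# A hypothetical pod IP for illustration; membership check only.
print(ipaddress.ip_address("192.168.0.17") in pod_cidr)  # → True
```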
Sep 4 17:32:11.044875 kubelet[3282]: I0904 17:32:11.044704 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b646b5f9-58b3-4ed0-9a08-e23660b117c3-lib-modules\") pod \"kube-proxy-gd2kx\" (UID: \"b646b5f9-58b3-4ed0-9a08-e23660b117c3\") " pod="kube-system/kube-proxy-gd2kx" Sep 4 17:32:11.044875 kubelet[3282]: I0904 17:32:11.044770 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4rw\" (UniqueName: \"kubernetes.io/projected/b646b5f9-58b3-4ed0-9a08-e23660b117c3-kube-api-access-wq4rw\") pod \"kube-proxy-gd2kx\" (UID: \"b646b5f9-58b3-4ed0-9a08-e23660b117c3\") " pod="kube-system/kube-proxy-gd2kx" Sep 4 17:32:11.044875 kubelet[3282]: I0904 17:32:11.044801 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b646b5f9-58b3-4ed0-9a08-e23660b117c3-kube-proxy\") pod \"kube-proxy-gd2kx\" (UID: \"b646b5f9-58b3-4ed0-9a08-e23660b117c3\") " pod="kube-system/kube-proxy-gd2kx" Sep 4 17:32:11.044875 kubelet[3282]: I0904 17:32:11.044834 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b646b5f9-58b3-4ed0-9a08-e23660b117c3-xtables-lock\") pod \"kube-proxy-gd2kx\" (UID: \"b646b5f9-58b3-4ed0-9a08-e23660b117c3\") " pod="kube-system/kube-proxy-gd2kx" Sep 4 17:32:11.255657 containerd[1692]: time="2024-09-04T17:32:11.255502741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gd2kx,Uid:b646b5f9-58b3-4ed0-9a08-e23660b117c3,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:11.318067 kubelet[3282]: I0904 17:32:11.316744 3282 topology_manager.go:215] "Topology Admit Handler" podUID="c5bf8dd3-07e8-4b26-8122-7790eea5dc3b" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-9r4h4" Sep 4 
17:32:11.328967 systemd[1]: Created slice kubepods-besteffort-podc5bf8dd3_07e8_4b26_8122_7790eea5dc3b.slice - libcontainer container kubepods-besteffort-podc5bf8dd3_07e8_4b26_8122_7790eea5dc3b.slice. Sep 4 17:32:11.447995 kubelet[3282]: I0904 17:32:11.447901 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c5bf8dd3-07e8-4b26-8122-7790eea5dc3b-var-lib-calico\") pod \"tigera-operator-5d56685c77-9r4h4\" (UID: \"c5bf8dd3-07e8-4b26-8122-7790eea5dc3b\") " pod="tigera-operator/tigera-operator-5d56685c77-9r4h4" Sep 4 17:32:11.447995 kubelet[3282]: I0904 17:32:11.447974 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdv8n\" (UniqueName: \"kubernetes.io/projected/c5bf8dd3-07e8-4b26-8122-7790eea5dc3b-kube-api-access-xdv8n\") pod \"tigera-operator-5d56685c77-9r4h4\" (UID: \"c5bf8dd3-07e8-4b26-8122-7790eea5dc3b\") " pod="tigera-operator/tigera-operator-5d56685c77-9r4h4" Sep 4 17:32:11.488707 containerd[1692]: time="2024-09-04T17:32:11.488613816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:11.488707 containerd[1692]: time="2024-09-04T17:32:11.488658516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:11.488707 containerd[1692]: time="2024-09-04T17:32:11.488676117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:11.489007 containerd[1692]: time="2024-09-04T17:32:11.488689217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:11.513321 systemd[1]: Started cri-containerd-0d7a69ce52fd90ccd9f8f7038b7bf0b37380a749934f3204410a84a2d2deda78.scope - libcontainer container 0d7a69ce52fd90ccd9f8f7038b7bf0b37380a749934f3204410a84a2d2deda78. Sep 4 17:32:11.536322 containerd[1692]: time="2024-09-04T17:32:11.536200127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gd2kx,Uid:b646b5f9-58b3-4ed0-9a08-e23660b117c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d7a69ce52fd90ccd9f8f7038b7bf0b37380a749934f3204410a84a2d2deda78\"" Sep 4 17:32:11.539546 containerd[1692]: time="2024-09-04T17:32:11.539492483Z" level=info msg="CreateContainer within sandbox \"0d7a69ce52fd90ccd9f8f7038b7bf0b37380a749934f3204410a84a2d2deda78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:32:11.632005 containerd[1692]: time="2024-09-04T17:32:11.631953160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-9r4h4,Uid:c5bf8dd3-07e8-4b26-8122-7790eea5dc3b,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:32:12.164028 containerd[1692]: time="2024-09-04T17:32:12.163858829Z" level=info msg="CreateContainer within sandbox \"0d7a69ce52fd90ccd9f8f7038b7bf0b37380a749934f3204410a84a2d2deda78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"221a70422fa5f84dbd71a07df67c820dcaa436699cd9f83ae72e75eb34f3ce2c\"" Sep 4 17:32:12.164964 containerd[1692]: time="2024-09-04T17:32:12.164791945Z" level=info msg="StartContainer for \"221a70422fa5f84dbd71a07df67c820dcaa436699cd9f83ae72e75eb34f3ce2c\"" Sep 4 17:32:12.205346 systemd[1]: Started cri-containerd-221a70422fa5f84dbd71a07df67c820dcaa436699cd9f83ae72e75eb34f3ce2c.scope - libcontainer container 221a70422fa5f84dbd71a07df67c820dcaa436699cd9f83ae72e75eb34f3ce2c. 
Sep 4 17:32:12.325272 containerd[1692]: time="2024-09-04T17:32:12.325197280Z" level=info msg="StartContainer for \"221a70422fa5f84dbd71a07df67c820dcaa436699cd9f83ae72e75eb34f3ce2c\" returns successfully" Sep 4 17:32:12.433711 containerd[1692]: time="2024-09-04T17:32:12.433293723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:12.433711 containerd[1692]: time="2024-09-04T17:32:12.433375324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:12.433711 containerd[1692]: time="2024-09-04T17:32:12.433409225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:12.433711 containerd[1692]: time="2024-09-04T17:32:12.433429825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:12.452373 systemd[1]: Started cri-containerd-1f317059eb1889eb4fee72189d46922d18595072d7d89354b7fd3f4f69ca89f2.scope - libcontainer container 1f317059eb1889eb4fee72189d46922d18595072d7d89354b7fd3f4f69ca89f2. Sep 4 17:32:12.514406 containerd[1692]: time="2024-09-04T17:32:12.513506990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-9r4h4,Uid:c5bf8dd3-07e8-4b26-8122-7790eea5dc3b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1f317059eb1889eb4fee72189d46922d18595072d7d89354b7fd3f4f69ca89f2\"" Sep 4 17:32:12.518414 containerd[1692]: time="2024-09-04T17:32:12.517009850Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:32:14.513051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340408259.mount: Deactivated successfully. 
Sep 4 17:32:15.218417 containerd[1692]: time="2024-09-04T17:32:15.218355442Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:15.220697 containerd[1692]: time="2024-09-04T17:32:15.220638280Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136509" Sep 4 17:32:15.224205 containerd[1692]: time="2024-09-04T17:32:15.224119239Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:15.228291 containerd[1692]: time="2024-09-04T17:32:15.228233608Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:15.229192 containerd[1692]: time="2024-09-04T17:32:15.228951020Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.711901369s" Sep 4 17:32:15.229192 containerd[1692]: time="2024-09-04T17:32:15.228993920Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:32:15.232752 containerd[1692]: time="2024-09-04T17:32:15.232723083Z" level=info msg="CreateContainer within sandbox \"1f317059eb1889eb4fee72189d46922d18595072d7d89354b7fd3f4f69ca89f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:32:15.270957 containerd[1692]: time="2024-09-04T17:32:15.270901225Z" level=info msg="CreateContainer within sandbox 
\"1f317059eb1889eb4fee72189d46922d18595072d7d89354b7fd3f4f69ca89f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0ee9aa780633ef76399daa532ee341171b947a9af8f9c87635012087181032e\"" Sep 4 17:32:15.272290 containerd[1692]: time="2024-09-04T17:32:15.271604936Z" level=info msg="StartContainer for \"a0ee9aa780633ef76399daa532ee341171b947a9af8f9c87635012087181032e\"" Sep 4 17:32:15.302536 systemd[1]: Started cri-containerd-a0ee9aa780633ef76399daa532ee341171b947a9af8f9c87635012087181032e.scope - libcontainer container a0ee9aa780633ef76399daa532ee341171b947a9af8f9c87635012087181032e. Sep 4 17:32:15.331765 containerd[1692]: time="2024-09-04T17:32:15.331614545Z" level=info msg="StartContainer for \"a0ee9aa780633ef76399daa532ee341171b947a9af8f9c87635012087181032e\" returns successfully" Sep 4 17:32:15.869704 kubelet[3282]: I0904 17:32:15.869648 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gd2kx" podStartSLOduration=5.869594985 podStartE2EDuration="5.869594985s" podCreationTimestamp="2024-09-04 17:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:12.867810831 +0000 UTC m=+15.667809449" watchObservedRunningTime="2024-09-04 17:32:15.869594985 +0000 UTC m=+18.669593503" Sep 4 17:32:17.803506 kubelet[3282]: I0904 17:32:17.803337 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-9r4h4" podStartSLOduration=4.088738864 podStartE2EDuration="6.803280878s" podCreationTimestamp="2024-09-04 17:32:11 +0000 UTC" firstStartedPulling="2024-09-04 17:32:12.515059517 +0000 UTC m=+15.315058035" lastFinishedPulling="2024-09-04 17:32:15.229601531 +0000 UTC m=+18.029600049" observedRunningTime="2024-09-04 17:32:15.870372898 +0000 UTC m=+18.670371416" watchObservedRunningTime="2024-09-04 17:32:17.803280878 +0000 UTC m=+20.603279396" 
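The `pod_startup_latency_tracker` entries above encode a simple relation that the tigera-operator numbers bear out: `podStartSLOduration` is the end-to-end duration minus the image-pull window (`lastFinishedPulling - firstStartedPulling`), which is why pods with zero-value pull timestamps (like kube-proxy) log identical SLO and E2E durations. Reproducing the logged values with exact decimal arithmetic (all constants copied verbatim from the log line above):

```python
from decimal import Decimal

# Values copied verbatim from the tigera-operator
# "Observed pod startup duration" log line above.
pod_start_e2e = Decimal("6.803280878")    # podStartE2EDuration, seconds
pull_started  = Decimal("12.515059517")   # firstStartedPulling, seconds past 17:32
pull_finished = Decimal("15.229601531")   # lastFinishedPulling, seconds past 17:32

pull_window = pull_finished - pull_started
slo_duration = pod_start_e2e - pull_window

print(pull_window)   # → 2.714542014
print(slo_duration)  # → 4.088738864, matching podStartSLOduration in the log
```

The pull window here (≈2.715 s) is slightly longer than the 2.711901369 s containerd reports for the pull itself, since the tracker's timestamps bracket the whole pull attempt, not just the transfer.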
Sep 4 17:32:18.322352 kubelet[3282]: I0904 17:32:18.321355 3282 topology_manager.go:215] "Topology Admit Handler" podUID="d7dd02c0-9911-4ca7-b60d-374ea42882ca" podNamespace="calico-system" podName="calico-typha-889bf75-57j8t" Sep 4 17:32:18.336978 systemd[1]: Created slice kubepods-besteffort-podd7dd02c0_9911_4ca7_b60d_374ea42882ca.slice - libcontainer container kubepods-besteffort-podd7dd02c0_9911_4ca7_b60d_374ea42882ca.slice. Sep 4 17:32:18.389607 kubelet[3282]: I0904 17:32:18.389558 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d7dd02c0-9911-4ca7-b60d-374ea42882ca-typha-certs\") pod \"calico-typha-889bf75-57j8t\" (UID: \"d7dd02c0-9911-4ca7-b60d-374ea42882ca\") " pod="calico-system/calico-typha-889bf75-57j8t" Sep 4 17:32:18.389607 kubelet[3282]: I0904 17:32:18.389622 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dd02c0-9911-4ca7-b60d-374ea42882ca-tigera-ca-bundle\") pod \"calico-typha-889bf75-57j8t\" (UID: \"d7dd02c0-9911-4ca7-b60d-374ea42882ca\") " pod="calico-system/calico-typha-889bf75-57j8t" Sep 4 17:32:18.389873 kubelet[3282]: I0904 17:32:18.389651 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9ll\" (UniqueName: \"kubernetes.io/projected/d7dd02c0-9911-4ca7-b60d-374ea42882ca-kube-api-access-ds9ll\") pod \"calico-typha-889bf75-57j8t\" (UID: \"d7dd02c0-9911-4ca7-b60d-374ea42882ca\") " pod="calico-system/calico-typha-889bf75-57j8t" Sep 4 17:32:18.495480 kubelet[3282]: I0904 17:32:18.495279 3282 topology_manager.go:215] "Topology Admit Handler" podUID="659cab9a-bfef-42af-b58d-20e02f626778" podNamespace="calico-system" podName="calico-node-kxqzg" Sep 4 17:32:18.514879 systemd[1]: Created slice kubepods-besteffort-pod659cab9a_bfef_42af_b58d_20e02f626778.slice - libcontainer 
container kubepods-besteffort-pod659cab9a_bfef_42af_b58d_20e02f626778.slice. Sep 4 17:32:18.591222 kubelet[3282]: I0904 17:32:18.590646 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-xtables-lock\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591222 kubelet[3282]: I0904 17:32:18.590880 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-flexvol-driver-host\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591222 kubelet[3282]: I0904 17:32:18.590929 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-policysync\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591222 kubelet[3282]: I0904 17:32:18.590963 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-cni-net-dir\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591222 kubelet[3282]: I0904 17:32:18.590998 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-lib-modules\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 
17:32:18.591631 kubelet[3282]: I0904 17:32:18.591027 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-cni-bin-dir\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591631 kubelet[3282]: I0904 17:32:18.591060 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/659cab9a-bfef-42af-b58d-20e02f626778-tigera-ca-bundle\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591631 kubelet[3282]: I0904 17:32:18.591090 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-var-run-calico\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591631 kubelet[3282]: I0904 17:32:18.591125 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-var-lib-calico\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.591631 kubelet[3282]: I0904 17:32:18.591175 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jdh\" (UniqueName: \"kubernetes.io/projected/659cab9a-bfef-42af-b58d-20e02f626778-kube-api-access-67jdh\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.592089 kubelet[3282]: I0904 17:32:18.591218 3282 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/659cab9a-bfef-42af-b58d-20e02f626778-node-certs\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.592089 kubelet[3282]: I0904 17:32:18.591246 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/659cab9a-bfef-42af-b58d-20e02f626778-cni-log-dir\") pod \"calico-node-kxqzg\" (UID: \"659cab9a-bfef-42af-b58d-20e02f626778\") " pod="calico-system/calico-node-kxqzg" Sep 4 17:32:18.651140 containerd[1692]: time="2024-09-04T17:32:18.651068224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-889bf75-57j8t,Uid:d7dd02c0-9911-4ca7-b60d-374ea42882ca,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:18.698847 kubelet[3282]: E0904 17:32:18.698816 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.699108 kubelet[3282]: W0904 17:32:18.698910 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.699108 kubelet[3282]: E0904 17:32:18.698950 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.700896 kubelet[3282]: E0904 17:32:18.700709 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.700896 kubelet[3282]: W0904 17:32:18.700738 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.700896 kubelet[3282]: E0904 17:32:18.700860 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.707281 kubelet[3282]: E0904 17:32:18.705572 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.707281 kubelet[3282]: W0904 17:32:18.705593 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.707281 kubelet[3282]: E0904 17:32:18.707199 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.731361 kubelet[3282]: E0904 17:32:18.731251 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.731361 kubelet[3282]: W0904 17:32:18.731280 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.731361 kubelet[3282]: E0904 17:32:18.731310 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.737192 kubelet[3282]: I0904 17:32:18.732971 3282 topology_manager.go:215] "Topology Admit Handler" podUID="516fb432-35a0-42b1-a39f-352e51299738" podNamespace="calico-system" podName="csi-node-driver-dw5hs" Sep 4 17:32:18.737192 kubelet[3282]: E0904 17:32:18.733379 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:18.738495 containerd[1692]: time="2024-09-04T17:32:18.737876183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:18.738495 containerd[1692]: time="2024-09-04T17:32:18.737965784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:18.738495 containerd[1692]: time="2024-09-04T17:32:18.737996185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:18.738495 containerd[1692]: time="2024-09-04T17:32:18.738017285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:18.781362 systemd[1]: Started cri-containerd-0a8f7e12223db612d5d92157772988de4168fdc8afca1b6b7d73715c31937c96.scope - libcontainer container 0a8f7e12223db612d5d92157772988de4168fdc8afca1b6b7d73715c31937c96. Sep 4 17:32:18.791970 kubelet[3282]: E0904 17:32:18.791933 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.791970 kubelet[3282]: W0904 17:32:18.791970 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.792183 kubelet[3282]: E0904 17:32:18.792107 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.793000 kubelet[3282]: E0904 17:32:18.792978 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.793000 kubelet[3282]: W0904 17:32:18.792996 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.793139 kubelet[3282]: E0904 17:32:18.793022 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.794198 kubelet[3282]: E0904 17:32:18.794154 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.794198 kubelet[3282]: W0904 17:32:18.794197 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.794334 kubelet[3282]: E0904 17:32:18.794216 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.796777 kubelet[3282]: E0904 17:32:18.796754 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.796777 kubelet[3282]: W0904 17:32:18.796775 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.796915 kubelet[3282]: E0904 17:32:18.796792 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.797134 kubelet[3282]: E0904 17:32:18.797114 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.797134 kubelet[3282]: W0904 17:32:18.797131 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.797182 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.797427 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.798286 kubelet[3282]: W0904 17:32:18.797438 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.797470 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.797684 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.798286 kubelet[3282]: W0904 17:32:18.797694 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.797710 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.798099 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.798286 kubelet[3282]: W0904 17:32:18.798127 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.798286 kubelet[3282]: E0904 17:32:18.798144 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.798696 kubelet[3282]: E0904 17:32:18.798421 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.798696 kubelet[3282]: W0904 17:32:18.798432 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.798696 kubelet[3282]: E0904 17:32:18.798447 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.800179 kubelet[3282]: E0904 17:32:18.798869 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.800179 kubelet[3282]: W0904 17:32:18.798883 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.800179 kubelet[3282]: E0904 17:32:18.798898 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.800179 kubelet[3282]: E0904 17:32:18.799575 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.800179 kubelet[3282]: W0904 17:32:18.799588 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.800179 kubelet[3282]: E0904 17:32:18.799607 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.800179 kubelet[3282]: E0904 17:32:18.799980 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.800179 kubelet[3282]: W0904 17:32:18.799991 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.800179 kubelet[3282]: E0904 17:32:18.800006 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.800570 kubelet[3282]: E0904 17:32:18.800410 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.800570 kubelet[3282]: W0904 17:32:18.800422 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.800570 kubelet[3282]: E0904 17:32:18.800439 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.802361 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803177 kubelet[3282]: W0904 17:32:18.802377 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.802396 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.802595 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803177 kubelet[3282]: W0904 17:32:18.802604 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.802619 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.802808 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803177 kubelet[3282]: W0904 17:32:18.802819 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.802834 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.803177 kubelet[3282]: E0904 17:32:18.803084 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803966 kubelet[3282]: W0904 17:32:18.803096 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.803966 kubelet[3282]: E0904 17:32:18.803112 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.803966 kubelet[3282]: E0904 17:32:18.803351 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803966 kubelet[3282]: W0904 17:32:18.803362 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.803966 kubelet[3282]: E0904 17:32:18.803379 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.803966 kubelet[3282]: E0904 17:32:18.803566 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803966 kubelet[3282]: W0904 17:32:18.803575 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.803966 kubelet[3282]: E0904 17:32:18.803589 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.803966 kubelet[3282]: E0904 17:32:18.803779 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.803966 kubelet[3282]: W0904 17:32:18.803789 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.805420 kubelet[3282]: E0904 17:32:18.803805 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.805420 kubelet[3282]: E0904 17:32:18.804144 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.805420 kubelet[3282]: W0904 17:32:18.804174 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.805420 kubelet[3282]: E0904 17:32:18.804192 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.805420 kubelet[3282]: I0904 17:32:18.804242 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/516fb432-35a0-42b1-a39f-352e51299738-socket-dir\") pod \"csi-node-driver-dw5hs\" (UID: \"516fb432-35a0-42b1-a39f-352e51299738\") " pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:18.805622 kubelet[3282]: E0904 17:32:18.805577 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.805622 kubelet[3282]: W0904 17:32:18.805592 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.805622 kubelet[3282]: E0904 17:32:18.805613 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.805743 kubelet[3282]: I0904 17:32:18.805647 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js54s\" (UniqueName: \"kubernetes.io/projected/516fb432-35a0-42b1-a39f-352e51299738-kube-api-access-js54s\") pod \"csi-node-driver-dw5hs\" (UID: \"516fb432-35a0-42b1-a39f-352e51299738\") " pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:18.806642 kubelet[3282]: E0904 17:32:18.805909 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.806642 kubelet[3282]: W0904 17:32:18.805927 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.806642 kubelet[3282]: E0904 17:32:18.806065 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.806642 kubelet[3282]: I0904 17:32:18.806230 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/516fb432-35a0-42b1-a39f-352e51299738-kubelet-dir\") pod \"csi-node-driver-dw5hs\" (UID: \"516fb432-35a0-42b1-a39f-352e51299738\") " pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:18.806642 kubelet[3282]: E0904 17:32:18.806357 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.806642 kubelet[3282]: W0904 17:32:18.806367 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.806642 kubelet[3282]: E0904 17:32:18.806480 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.806642 kubelet[3282]: E0904 17:32:18.806610 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.806642 kubelet[3282]: W0904 17:32:18.806621 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.807117 kubelet[3282]: E0904 17:32:18.806641 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.807117 kubelet[3282]: E0904 17:32:18.806843 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.807117 kubelet[3282]: W0904 17:32:18.806854 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.807117 kubelet[3282]: E0904 17:32:18.806882 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.807117 kubelet[3282]: I0904 17:32:18.806912 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/516fb432-35a0-42b1-a39f-352e51299738-registration-dir\") pod \"csi-node-driver-dw5hs\" (UID: \"516fb432-35a0-42b1-a39f-352e51299738\") " pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:18.808359 kubelet[3282]: E0904 17:32:18.807879 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.808359 kubelet[3282]: W0904 17:32:18.807898 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.808359 kubelet[3282]: E0904 17:32:18.807933 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.808359 kubelet[3282]: E0904 17:32:18.808137 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.808359 kubelet[3282]: W0904 17:32:18.808147 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.808359 kubelet[3282]: E0904 17:32:18.808172 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.810179 kubelet[3282]: E0904 17:32:18.808668 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.810179 kubelet[3282]: W0904 17:32:18.808683 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.810179 kubelet[3282]: E0904 17:32:18.808710 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.810179 kubelet[3282]: E0904 17:32:18.808919 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.810179 kubelet[3282]: W0904 17:32:18.808930 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.810450 kubelet[3282]: E0904 17:32:18.810214 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.810450 kubelet[3282]: I0904 17:32:18.810317 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/516fb432-35a0-42b1-a39f-352e51299738-varrun\") pod \"csi-node-driver-dw5hs\" (UID: \"516fb432-35a0-42b1-a39f-352e51299738\") " pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:18.810546 kubelet[3282]: E0904 17:32:18.810533 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.810592 kubelet[3282]: W0904 17:32:18.810548 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.810592 kubelet[3282]: E0904 17:32:18.810566 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.810970 kubelet[3282]: E0904 17:32:18.810951 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.811047 kubelet[3282]: W0904 17:32:18.810968 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.811047 kubelet[3282]: E0904 17:32:18.811010 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.811512 kubelet[3282]: E0904 17:32:18.811277 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.811512 kubelet[3282]: W0904 17:32:18.811304 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.811512 kubelet[3282]: E0904 17:32:18.811322 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.811671 kubelet[3282]: E0904 17:32:18.811585 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.811671 kubelet[3282]: W0904 17:32:18.811595 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.811671 kubelet[3282]: E0904 17:32:18.811634 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.812378 kubelet[3282]: E0904 17:32:18.811920 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.812378 kubelet[3282]: W0904 17:32:18.811935 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.812378 kubelet[3282]: E0904 17:32:18.811955 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.819861 containerd[1692]: time="2024-09-04T17:32:18.819808160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxqzg,Uid:659cab9a-bfef-42af-b58d-20e02f626778,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:18.886404 containerd[1692]: time="2024-09-04T17:32:18.886063273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:18.886404 containerd[1692]: time="2024-09-04T17:32:18.886147374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:18.886404 containerd[1692]: time="2024-09-04T17:32:18.886195475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:18.886404 containerd[1692]: time="2024-09-04T17:32:18.886213275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:18.912696 kubelet[3282]: E0904 17:32:18.912354 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.912696 kubelet[3282]: W0904 17:32:18.912478 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.912696 kubelet[3282]: E0904 17:32:18.912514 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.914293 kubelet[3282]: E0904 17:32:18.913983 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.914293 kubelet[3282]: W0904 17:32:18.914015 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.914293 kubelet[3282]: E0904 17:32:18.914063 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.915729 kubelet[3282]: E0904 17:32:18.915234 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.915729 kubelet[3282]: W0904 17:32:18.915267 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.915729 kubelet[3282]: E0904 17:32:18.915324 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.915729 kubelet[3282]: E0904 17:32:18.915713 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.915729 kubelet[3282]: W0904 17:32:18.915726 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.916048 kubelet[3282]: E0904 17:32:18.915759 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.918196 kubelet[3282]: E0904 17:32:18.917382 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.918196 kubelet[3282]: W0904 17:32:18.917397 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.918196 kubelet[3282]: E0904 17:32:18.917530 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.919682 kubelet[3282]: E0904 17:32:18.919253 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.919682 kubelet[3282]: W0904 17:32:18.919271 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.919682 kubelet[3282]: E0904 17:32:18.919317 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.920469 kubelet[3282]: E0904 17:32:18.919713 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.920469 kubelet[3282]: W0904 17:32:18.919725 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.920469 kubelet[3282]: E0904 17:32:18.920409 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.920896 kubelet[3282]: E0904 17:32:18.920759 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.921443 kubelet[3282]: W0904 17:32:18.921198 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.922262 kubelet[3282]: E0904 17:32:18.922128 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.922262 kubelet[3282]: E0904 17:32:18.922236 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.922262 kubelet[3282]: W0904 17:32:18.922245 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.923042 kubelet[3282]: E0904 17:32:18.922340 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.923798 kubelet[3282]: E0904 17:32:18.923682 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.923798 kubelet[3282]: W0904 17:32:18.923698 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.924417 kubelet[3282]: E0904 17:32:18.924223 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.924944 kubelet[3282]: E0904 17:32:18.924732 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.924944 kubelet[3282]: W0904 17:32:18.924750 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.925874 kubelet[3282]: E0904 17:32:18.925229 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.925952 kubelet[3282]: E0904 17:32:18.925934 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.926006 kubelet[3282]: W0904 17:32:18.925952 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.926506 kubelet[3282]: E0904 17:32:18.926425 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.928465 kubelet[3282]: E0904 17:32:18.926927 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.928465 kubelet[3282]: W0904 17:32:18.926942 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.928465 kubelet[3282]: E0904 17:32:18.927027 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.928465 kubelet[3282]: E0904 17:32:18.927670 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.928465 kubelet[3282]: W0904 17:32:18.927682 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.928465 kubelet[3282]: E0904 17:32:18.927949 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.930856 kubelet[3282]: E0904 17:32:18.929014 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.930856 kubelet[3282]: W0904 17:32:18.929035 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.930856 kubelet[3282]: E0904 17:32:18.929376 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.930856 kubelet[3282]: E0904 17:32:18.930075 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.930856 kubelet[3282]: W0904 17:32:18.930088 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.931761 kubelet[3282]: E0904 17:32:18.931545 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.931761 kubelet[3282]: E0904 17:32:18.931613 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.931761 kubelet[3282]: W0904 17:32:18.931622 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.931761 kubelet[3282]: E0904 17:32:18.931725 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.931923 kubelet[3282]: E0904 17:32:18.931905 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.931923 kubelet[3282]: W0904 17:32:18.931915 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.932616 kubelet[3282]: E0904 17:32:18.932589 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.933326 kubelet[3282]: E0904 17:32:18.933304 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.933326 kubelet[3282]: W0904 17:32:18.933321 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.933861 kubelet[3282]: E0904 17:32:18.933838 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.935660 kubelet[3282]: E0904 17:32:18.934363 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.935660 kubelet[3282]: W0904 17:32:18.934379 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.935660 kubelet[3282]: E0904 17:32:18.934641 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.935660 kubelet[3282]: E0904 17:32:18.934957 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.935660 kubelet[3282]: W0904 17:32:18.934969 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.935660 kubelet[3282]: E0904 17:32:18.935088 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.935537 systemd[1]: Started cri-containerd-5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728.scope - libcontainer container 5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728. Sep 4 17:32:18.937395 kubelet[3282]: E0904 17:32:18.937050 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.937395 kubelet[3282]: W0904 17:32:18.937066 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.937640 kubelet[3282]: E0904 17:32:18.937606 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.938200 kubelet[3282]: E0904 17:32:18.937811 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.938200 kubelet[3282]: W0904 17:32:18.937826 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.938200 kubelet[3282]: E0904 17:32:18.937982 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.938744 kubelet[3282]: E0904 17:32:18.938722 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.939730 kubelet[3282]: W0904 17:32:18.938844 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.939730 kubelet[3282]: E0904 17:32:18.938878 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.943181 kubelet[3282]: E0904 17:32:18.942050 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.943181 kubelet[3282]: W0904 17:32:18.942068 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.943181 kubelet[3282]: E0904 17:32:18.942086 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:18.943349 containerd[1692]: time="2024-09-04T17:32:18.942594123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-889bf75-57j8t,Uid:d7dd02c0-9911-4ca7-b60d-374ea42882ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a8f7e12223db612d5d92157772988de4168fdc8afca1b6b7d73715c31937c96\"" Sep 4 17:32:18.946269 containerd[1692]: time="2024-09-04T17:32:18.944752859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:32:18.961560 kubelet[3282]: E0904 17:32:18.961515 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:18.961560 kubelet[3282]: W0904 17:32:18.961542 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:18.961560 kubelet[3282]: E0904 17:32:18.961571 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:18.994644 containerd[1692]: time="2024-09-04T17:32:18.994587297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxqzg,Uid:659cab9a-bfef-42af-b58d-20e02f626778,Namespace:calico-system,Attempt:0,} returns sandbox id \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\"" Sep 4 17:32:19.441119 update_engine[1676]: I0904 17:32:19.440313 1676 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:32:19.441119 update_engine[1676]: I0904 17:32:19.440685 1676 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:32:19.441119 update_engine[1676]: I0904 17:32:19.440969 1676 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:32:19.461123 update_engine[1676]: E0904 17:32:19.460973 1676 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:32:19.461123 update_engine[1676]: I0904 17:32:19.461071 1676 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 4 17:32:20.775778 kubelet[3282]: E0904 17:32:20.775687 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:21.931483 containerd[1692]: time="2024-09-04T17:32:21.931422610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:21.935391 containerd[1692]: time="2024-09-04T17:32:21.935320467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:32:21.940201 containerd[1692]: time="2024-09-04T17:32:21.940122836Z" level=info msg="ImageCreate event 
name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:21.943476 containerd[1692]: time="2024-09-04T17:32:21.943418184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:21.944299 containerd[1692]: time="2024-09-04T17:32:21.944125295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.999325534s" Sep 4 17:32:21.944299 containerd[1692]: time="2024-09-04T17:32:21.944184496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:32:21.945243 containerd[1692]: time="2024-09-04T17:32:21.945066008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:32:21.966721 containerd[1692]: time="2024-09-04T17:32:21.966458519Z" level=info msg="CreateContainer within sandbox \"0a8f7e12223db612d5d92157772988de4168fdc8afca1b6b7d73715c31937c96\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:32:22.020847 containerd[1692]: time="2024-09-04T17:32:22.020790909Z" level=info msg="CreateContainer within sandbox \"0a8f7e12223db612d5d92157772988de4168fdc8afca1b6b7d73715c31937c96\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fde6a3c34dd4d2ed4ef625dee2b797e6b0e18d9e2d2492add9a91bfd2ad43438\"" Sep 4 17:32:22.021635 containerd[1692]: time="2024-09-04T17:32:22.021395418Z" level=info msg="StartContainer for 
\"fde6a3c34dd4d2ed4ef625dee2b797e6b0e18d9e2d2492add9a91bfd2ad43438\"" Sep 4 17:32:22.058406 systemd[1]: Started cri-containerd-fde6a3c34dd4d2ed4ef625dee2b797e6b0e18d9e2d2492add9a91bfd2ad43438.scope - libcontainer container fde6a3c34dd4d2ed4ef625dee2b797e6b0e18d9e2d2492add9a91bfd2ad43438. Sep 4 17:32:22.120233 containerd[1692]: time="2024-09-04T17:32:22.120176154Z" level=info msg="StartContainer for \"fde6a3c34dd4d2ed4ef625dee2b797e6b0e18d9e2d2492add9a91bfd2ad43438\" returns successfully" Sep 4 17:32:22.776647 kubelet[3282]: E0904 17:32:22.776589 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:22.930927 kubelet[3282]: E0904 17:32:22.930882 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.930927 kubelet[3282]: W0904 17:32:22.930908 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.930927 kubelet[3282]: E0904 17:32:22.930936 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.931459 kubelet[3282]: E0904 17:32:22.931250 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.931459 kubelet[3282]: W0904 17:32:22.931263 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.931459 kubelet[3282]: E0904 17:32:22.931283 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.931648 kubelet[3282]: E0904 17:32:22.931483 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.931648 kubelet[3282]: W0904 17:32:22.931493 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.931648 kubelet[3282]: E0904 17:32:22.931509 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.931906 kubelet[3282]: E0904 17:32:22.931696 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.931906 kubelet[3282]: W0904 17:32:22.931707 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.931906 kubelet[3282]: E0904 17:32:22.931722 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.932036 kubelet[3282]: E0904 17:32:22.932015 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.932036 kubelet[3282]: W0904 17:32:22.932027 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.932129 kubelet[3282]: E0904 17:32:22.932043 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.932278 kubelet[3282]: E0904 17:32:22.932231 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.932278 kubelet[3282]: W0904 17:32:22.932248 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.932278 kubelet[3282]: E0904 17:32:22.932263 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.932536 kubelet[3282]: E0904 17:32:22.932457 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.932536 kubelet[3282]: W0904 17:32:22.932468 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.932536 kubelet[3282]: E0904 17:32:22.932483 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.932828 kubelet[3282]: E0904 17:32:22.932663 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.932828 kubelet[3282]: W0904 17:32:22.932673 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.932828 kubelet[3282]: E0904 17:32:22.932744 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.933044 kubelet[3282]: E0904 17:32:22.932990 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.933044 kubelet[3282]: W0904 17:32:22.933003 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.933044 kubelet[3282]: E0904 17:32:22.933019 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.933270 kubelet[3282]: E0904 17:32:22.933215 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.933270 kubelet[3282]: W0904 17:32:22.933225 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.933270 kubelet[3282]: E0904 17:32:22.933242 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.933468 kubelet[3282]: E0904 17:32:22.933418 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.933468 kubelet[3282]: W0904 17:32:22.933428 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.933468 kubelet[3282]: E0904 17:32:22.933444 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.933724 kubelet[3282]: E0904 17:32:22.933673 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.933724 kubelet[3282]: W0904 17:32:22.933687 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.933724 kubelet[3282]: E0904 17:32:22.933705 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.933913 kubelet[3282]: E0904 17:32:22.933896 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.933913 kubelet[3282]: W0904 17:32:22.933909 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.934085 kubelet[3282]: E0904 17:32:22.933924 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.934214 kubelet[3282]: E0904 17:32:22.934143 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.934214 kubelet[3282]: W0904 17:32:22.934153 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.934377 kubelet[3282]: E0904 17:32:22.934227 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.934441 kubelet[3282]: E0904 17:32:22.934424 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.934441 kubelet[3282]: W0904 17:32:22.934434 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.934555 kubelet[3282]: E0904 17:32:22.934451 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.955175 kubelet[3282]: E0904 17:32:22.954992 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.955175 kubelet[3282]: W0904 17:32:22.955022 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.955175 kubelet[3282]: E0904 17:32:22.955052 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.955838 kubelet[3282]: E0904 17:32:22.955672 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.955838 kubelet[3282]: W0904 17:32:22.955689 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.955838 kubelet[3282]: E0904 17:32:22.955717 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.956316 kubelet[3282]: E0904 17:32:22.956176 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.956316 kubelet[3282]: W0904 17:32:22.956191 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.956316 kubelet[3282]: E0904 17:32:22.956219 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.956830 kubelet[3282]: E0904 17:32:22.956658 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.956830 kubelet[3282]: W0904 17:32:22.956671 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.956830 kubelet[3282]: E0904 17:32:22.956711 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.957180 kubelet[3282]: E0904 17:32:22.957026 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.957180 kubelet[3282]: W0904 17:32:22.957038 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.957180 kubelet[3282]: E0904 17:32:22.957151 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.957701 kubelet[3282]: E0904 17:32:22.957495 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.957701 kubelet[3282]: W0904 17:32:22.957509 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.957701 kubelet[3282]: E0904 17:32:22.957532 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.958137 kubelet[3282]: E0904 17:32:22.957903 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.958137 kubelet[3282]: W0904 17:32:22.957917 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.958137 kubelet[3282]: E0904 17:32:22.957958 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.958666 kubelet[3282]: E0904 17:32:22.958379 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.958666 kubelet[3282]: W0904 17:32:22.958392 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.958666 kubelet[3282]: E0904 17:32:22.958432 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.958666 kubelet[3282]: E0904 17:32:22.958609 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.958666 kubelet[3282]: W0904 17:32:22.958618 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.958666 kubelet[3282]: E0904 17:32:22.958646 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.959382 kubelet[3282]: E0904 17:32:22.959202 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.959382 kubelet[3282]: W0904 17:32:22.959216 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.959382 kubelet[3282]: E0904 17:32:22.959246 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.959878 kubelet[3282]: E0904 17:32:22.959574 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.959878 kubelet[3282]: W0904 17:32:22.959587 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.959878 kubelet[3282]: E0904 17:32:22.959616 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.960310 kubelet[3282]: E0904 17:32:22.960295 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.960495 kubelet[3282]: W0904 17:32:22.960408 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.960742 kubelet[3282]: E0904 17:32:22.960731 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.960886 kubelet[3282]: W0904 17:32:22.960810 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.961238 kubelet[3282]: E0904 17:32:22.961089 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.961238 kubelet[3282]: W0904 17:32:22.961102 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Sep 4 17:32:22.961238 kubelet[3282]: E0904 17:32:22.961118 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.961393 kubelet[3282]: E0904 17:32:22.961245 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.961393 kubelet[3282]: E0904 17:32:22.961328 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.962051 kubelet[3282]: E0904 17:32:22.961664 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.962051 kubelet[3282]: W0904 17:32:22.961677 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.962051 kubelet[3282]: E0904 17:32:22.961706 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.962619 kubelet[3282]: E0904 17:32:22.962464 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.962619 kubelet[3282]: W0904 17:32:22.962476 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.962619 kubelet[3282]: E0904 17:32:22.962498 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:22.963573 kubelet[3282]: E0904 17:32:22.963505 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.963573 kubelet[3282]: W0904 17:32:22.963518 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.963573 kubelet[3282]: E0904 17:32:22.963535 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:32:22.964003 kubelet[3282]: E0904 17:32:22.963943 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:32:22.964003 kubelet[3282]: W0904 17:32:22.963955 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:32:22.964003 kubelet[3282]: E0904 17:32:22.963970 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:32:23.180308 containerd[1692]: time="2024-09-04T17:32:23.180245863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:23.182802 containerd[1692]: time="2024-09-04T17:32:23.182734199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:32:23.185724 containerd[1692]: time="2024-09-04T17:32:23.185659442Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:23.190840 containerd[1692]: time="2024-09-04T17:32:23.190764616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:23.191571 containerd[1692]: time="2024-09-04T17:32:23.191426925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.246323216s" Sep 4 17:32:23.191571 containerd[1692]: time="2024-09-04T17:32:23.191468426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:32:23.194029 containerd[1692]: time="2024-09-04T17:32:23.193860561Z" level=info msg="CreateContainer within sandbox \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:32:23.227566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784498200.mount: Deactivated successfully. Sep 4 17:32:23.236747 containerd[1692]: time="2024-09-04T17:32:23.236693183Z" level=info msg="CreateContainer within sandbox \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46\"" Sep 4 17:32:23.238139 containerd[1692]: time="2024-09-04T17:32:23.237356593Z" level=info msg="StartContainer for \"a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46\"" Sep 4 17:32:23.294334 systemd[1]: Started cri-containerd-a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46.scope - libcontainer container a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46. Sep 4 17:32:23.329500 containerd[1692]: time="2024-09-04T17:32:23.329137027Z" level=info msg="StartContainer for \"a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46\" returns successfully" Sep 4 17:32:23.338422 systemd[1]: cri-containerd-a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46.scope: Deactivated successfully. 
Sep 4 17:32:23.377861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46-rootfs.mount: Deactivated successfully. Sep 4 17:32:23.891213 kubelet[3282]: I0904 17:32:23.890481 3282 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:32:23.927178 kubelet[3282]: I0904 17:32:23.926270 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-889bf75-57j8t" podStartSLOduration=2.925761454 podStartE2EDuration="5.926199006s" podCreationTimestamp="2024-09-04 17:32:18 +0000 UTC" firstStartedPulling="2024-09-04 17:32:18.944308852 +0000 UTC m=+21.744307370" lastFinishedPulling="2024-09-04 17:32:21.944746304 +0000 UTC m=+24.744744922" observedRunningTime="2024-09-04 17:32:22.894397708 +0000 UTC m=+25.694396326" watchObservedRunningTime="2024-09-04 17:32:23.926199006 +0000 UTC m=+26.726197524" Sep 4 17:32:24.637411 containerd[1692]: time="2024-09-04T17:32:24.637313743Z" level=info msg="shim disconnected" id=a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46 namespace=k8s.io Sep 4 17:32:24.637411 containerd[1692]: time="2024-09-04T17:32:24.637385644Z" level=warning msg="cleaning up after shim disconnected" id=a107c533c5884a469b4c26846daf883d36f92d13f4a9991f6c9f853365772a46 namespace=k8s.io Sep 4 17:32:24.637411 containerd[1692]: time="2024-09-04T17:32:24.637401744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:24.776571 kubelet[3282]: E0904 17:32:24.776500 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:24.895545 containerd[1692]: time="2024-09-04T17:32:24.895380494Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:32:26.776317 kubelet[3282]: E0904 17:32:26.776276 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:28.776222 kubelet[3282]: E0904 17:32:28.776170 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:29.028079 containerd[1692]: time="2024-09-04T17:32:29.027934950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:29.029992 containerd[1692]: time="2024-09-04T17:32:29.029934486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:32:29.033942 containerd[1692]: time="2024-09-04T17:32:29.033882956Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:29.038217 containerd[1692]: time="2024-09-04T17:32:29.038144032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:29.038956 containerd[1692]: time="2024-09-04T17:32:29.038818744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.14338905s" Sep 4 17:32:29.038956 containerd[1692]: time="2024-09-04T17:32:29.038859245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:32:29.041295 containerd[1692]: time="2024-09-04T17:32:29.041262788Z" level=info msg="CreateContainer within sandbox \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:32:29.085899 containerd[1692]: time="2024-09-04T17:32:29.085859682Z" level=info msg="CreateContainer within sandbox \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c\"" Sep 4 17:32:29.088050 containerd[1692]: time="2024-09-04T17:32:29.086366391Z" level=info msg="StartContainer for \"ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c\"" Sep 4 17:32:29.124314 systemd[1]: Started cri-containerd-ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c.scope - libcontainer container ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c. 
Sep 4 17:32:29.155281 containerd[1692]: time="2024-09-04T17:32:29.155226019Z" level=info msg="StartContainer for \"ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c\" returns successfully" Sep 4 17:32:29.440889 update_engine[1676]: I0904 17:32:29.440820 1676 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:32:29.441496 update_engine[1676]: I0904 17:32:29.441115 1676 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:32:29.441561 update_engine[1676]: I0904 17:32:29.441491 1676 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:32:29.463293 update_engine[1676]: E0904 17:32:29.463238 1676 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:32:29.463475 update_engine[1676]: I0904 17:32:29.463330 1676 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 4 17:32:30.553366 containerd[1692]: time="2024-09-04T17:32:30.553295963Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:32:30.555563 systemd[1]: cri-containerd-ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c.scope: Deactivated successfully. Sep 4 17:32:30.580887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c-rootfs.mount: Deactivated successfully. 
Sep 4 17:32:30.631639 kubelet[3282]: I0904 17:32:30.631311 3282 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:32:31.079715 kubelet[3282]: I0904 17:32:30.652417 3282 topology_manager.go:215] "Topology Admit Handler" podUID="39aed6ce-ad34-481b-9f5d-53f97e2a213b" podNamespace="kube-system" podName="coredns-76f75df574-dshhv" Sep 4 17:32:31.079715 kubelet[3282]: I0904 17:32:30.658559 3282 topology_manager.go:215] "Topology Admit Handler" podUID="de6c1c61-155f-48e7-a53f-5ba66c45e5f2" podNamespace="kube-system" podName="coredns-76f75df574-8jztd" Sep 4 17:32:31.079715 kubelet[3282]: I0904 17:32:30.659000 3282 topology_manager.go:215] "Topology Admit Handler" podUID="87a16e16-c2e6-4905-8da8-5de334819488" podNamespace="calico-system" podName="calico-kube-controllers-7c9db4776-46qgr" Sep 4 17:32:31.079715 kubelet[3282]: I0904 17:32:30.712865 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zn7v\" (UniqueName: \"kubernetes.io/projected/de6c1c61-155f-48e7-a53f-5ba66c45e5f2-kube-api-access-4zn7v\") pod \"coredns-76f75df574-8jztd\" (UID: \"de6c1c61-155f-48e7-a53f-5ba66c45e5f2\") " pod="kube-system/coredns-76f75df574-8jztd" Sep 4 17:32:31.079715 kubelet[3282]: I0904 17:32:30.712962 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbks\" (UniqueName: \"kubernetes.io/projected/39aed6ce-ad34-481b-9f5d-53f97e2a213b-kube-api-access-2gbks\") pod \"coredns-76f75df574-dshhv\" (UID: \"39aed6ce-ad34-481b-9f5d-53f97e2a213b\") " pod="kube-system/coredns-76f75df574-dshhv" Sep 4 17:32:31.079715 kubelet[3282]: I0904 17:32:30.712998 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a16e16-c2e6-4905-8da8-5de334819488-tigera-ca-bundle\") pod \"calico-kube-controllers-7c9db4776-46qgr\" (UID: 
\"87a16e16-c2e6-4905-8da8-5de334819488\") " pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" Sep 4 17:32:30.666039 systemd[1]: Created slice kubepods-burstable-pod39aed6ce_ad34_481b_9f5d_53f97e2a213b.slice - libcontainer container kubepods-burstable-pod39aed6ce_ad34_481b_9f5d_53f97e2a213b.slice. Sep 4 17:32:31.082023 kubelet[3282]: I0904 17:32:30.713035 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8z56\" (UniqueName: \"kubernetes.io/projected/87a16e16-c2e6-4905-8da8-5de334819488-kube-api-access-r8z56\") pod \"calico-kube-controllers-7c9db4776-46qgr\" (UID: \"87a16e16-c2e6-4905-8da8-5de334819488\") " pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" Sep 4 17:32:31.082023 kubelet[3282]: I0904 17:32:30.713080 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de6c1c61-155f-48e7-a53f-5ba66c45e5f2-config-volume\") pod \"coredns-76f75df574-8jztd\" (UID: \"de6c1c61-155f-48e7-a53f-5ba66c45e5f2\") " pod="kube-system/coredns-76f75df574-8jztd" Sep 4 17:32:31.082023 kubelet[3282]: I0904 17:32:30.713136 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39aed6ce-ad34-481b-9f5d-53f97e2a213b-config-volume\") pod \"coredns-76f75df574-dshhv\" (UID: \"39aed6ce-ad34-481b-9f5d-53f97e2a213b\") " pod="kube-system/coredns-76f75df574-dshhv" Sep 4 17:32:31.082204 containerd[1692]: time="2024-09-04T17:32:31.080496120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dw5hs,Uid:516fb432-35a0-42b1-a39f-352e51299738,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:30.675287 systemd[1]: Created slice kubepods-burstable-podde6c1c61_155f_48e7_a53f_5ba66c45e5f2.slice - libcontainer container kubepods-burstable-podde6c1c61_155f_48e7_a53f_5ba66c45e5f2.slice. 
Sep 4 17:32:30.685904 systemd[1]: Created slice kubepods-besteffort-pod87a16e16_c2e6_4905_8da8_5de334819488.slice - libcontainer container kubepods-besteffort-pod87a16e16_c2e6_4905_8da8_5de334819488.slice. Sep 4 17:32:30.782358 systemd[1]: Created slice kubepods-besteffort-pod516fb432_35a0_42b1_a39f_352e51299738.slice - libcontainer container kubepods-besteffort-pod516fb432_35a0_42b1_a39f_352e51299738.slice. Sep 4 17:32:31.382101 containerd[1692]: time="2024-09-04T17:32:31.381952841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dshhv,Uid:39aed6ce-ad34-481b-9f5d-53f97e2a213b,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:31.385623 containerd[1692]: time="2024-09-04T17:32:31.385581002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8jztd,Uid:de6c1c61-155f-48e7-a53f-5ba66c45e5f2,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:31.388267 containerd[1692]: time="2024-09-04T17:32:31.388139146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9db4776-46qgr,Uid:87a16e16-c2e6-4905-8da8-5de334819488,Namespace:calico-system,Attempt:0,}" Sep 4 17:32:33.012497 containerd[1692]: time="2024-09-04T17:32:33.012396139Z" level=info msg="shim disconnected" id=ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c namespace=k8s.io Sep 4 17:32:33.013134 containerd[1692]: time="2024-09-04T17:32:33.012520141Z" level=warning msg="cleaning up after shim disconnected" id=ce5e24392162a7342a4b8a07ad9b5fe2040022e8494fb8c3594cd9875e78f72c namespace=k8s.io Sep 4 17:32:33.013134 containerd[1692]: time="2024-09-04T17:32:33.012542942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:33.919595 containerd[1692]: time="2024-09-04T17:32:33.919548650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:32:34.212660 containerd[1692]: time="2024-09-04T17:32:34.212341624Z" level=error msg="Failed to destroy network for sandbox 
\"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.213233 containerd[1692]: time="2024-09-04T17:32:34.212799132Z" level=error msg="encountered an error cleaning up failed sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.213233 containerd[1692]: time="2024-09-04T17:32:34.212875733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8jztd,Uid:de6c1c61-155f-48e7-a53f-5ba66c45e5f2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.214298 kubelet[3282]: E0904 17:32:34.213667 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.214298 kubelet[3282]: E0904 17:32:34.213757 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8jztd" Sep 4 17:32:34.214298 kubelet[3282]: E0904 17:32:34.213791 3282 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8jztd" Sep 4 17:32:34.215441 kubelet[3282]: E0904 17:32:34.213863 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-8jztd_kube-system(de6c1c61-155f-48e7-a53f-5ba66c45e5f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-8jztd_kube-system(de6c1c61-155f-48e7-a53f-5ba66c45e5f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8jztd" podUID="de6c1c61-155f-48e7-a53f-5ba66c45e5f2" Sep 4 17:32:34.215827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e-shm.mount: Deactivated successfully. 
Sep 4 17:32:34.272267 containerd[1692]: time="2024-09-04T17:32:34.272207741Z" level=error msg="Failed to destroy network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.272623 containerd[1692]: time="2024-09-04T17:32:34.272584547Z" level=error msg="encountered an error cleaning up failed sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.272758 containerd[1692]: time="2024-09-04T17:32:34.272656249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dw5hs,Uid:516fb432-35a0-42b1-a39f-352e51299738,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.272988 kubelet[3282]: E0904 17:32:34.272956 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.273091 kubelet[3282]: E0904 17:32:34.273035 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:34.273091 kubelet[3282]: E0904 17:32:34.273064 3282 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dw5hs" Sep 4 17:32:34.273206 kubelet[3282]: E0904 17:32:34.273150 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dw5hs_calico-system(516fb432-35a0-42b1-a39f-352e51299738)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dw5hs_calico-system(516fb432-35a0-42b1-a39f-352e51299738)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:34.366007 containerd[1692]: time="2024-09-04T17:32:34.365943633Z" level=error msg="Failed to destroy network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 
17:32:34.366362 containerd[1692]: time="2024-09-04T17:32:34.366327740Z" level=error msg="encountered an error cleaning up failed sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.366477 containerd[1692]: time="2024-09-04T17:32:34.366390641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dshhv,Uid:39aed6ce-ad34-481b-9f5d-53f97e2a213b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.366697 kubelet[3282]: E0904 17:32:34.366667 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.366799 kubelet[3282]: E0904 17:32:34.366740 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dshhv" Sep 4 17:32:34.366799 kubelet[3282]: E0904 17:32:34.366775 3282 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dshhv" Sep 4 17:32:34.366884 kubelet[3282]: E0904 17:32:34.366851 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dshhv_kube-system(39aed6ce-ad34-481b-9f5d-53f97e2a213b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dshhv_kube-system(39aed6ce-ad34-481b-9f5d-53f97e2a213b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dshhv" podUID="39aed6ce-ad34-481b-9f5d-53f97e2a213b" Sep 4 17:32:34.416336 containerd[1692]: time="2024-09-04T17:32:34.416279789Z" level=error msg="Failed to destroy network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.416724 containerd[1692]: time="2024-09-04T17:32:34.416683595Z" level=error msg="encountered an error cleaning up failed sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.416830 containerd[1692]: time="2024-09-04T17:32:34.416759997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9db4776-46qgr,Uid:87a16e16-c2e6-4905-8da8-5de334819488,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.417126 kubelet[3282]: E0904 17:32:34.417082 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:34.418766 kubelet[3282]: E0904 17:32:34.417176 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" Sep 4 17:32:34.418766 kubelet[3282]: E0904 17:32:34.417207 3282 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" Sep 4 17:32:34.418766 kubelet[3282]: E0904 17:32:34.417280 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c9db4776-46qgr_calico-system(87a16e16-c2e6-4905-8da8-5de334819488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c9db4776-46qgr_calico-system(87a16e16-c2e6-4905-8da8-5de334819488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" podUID="87a16e16-c2e6-4905-8da8-5de334819488" Sep 4 17:32:34.922354 kubelet[3282]: I0904 17:32:34.921577 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:34.922740 containerd[1692]: time="2024-09-04T17:32:34.922587090Z" level=info msg="StopPodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\"" Sep 4 17:32:34.923850 containerd[1692]: time="2024-09-04T17:32:34.923443604Z" level=info msg="Ensure that sandbox 79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef in task-service has been cleanup successfully" Sep 4 17:32:34.925195 kubelet[3282]: I0904 17:32:34.924850 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:34.928394 containerd[1692]: time="2024-09-04T17:32:34.928244986Z" level=info msg="StopPodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\"" Sep 4 17:32:34.928920 containerd[1692]: 
time="2024-09-04T17:32:34.928783195Z" level=info msg="Ensure that sandbox 761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e in task-service has been cleanup successfully" Sep 4 17:32:34.930900 kubelet[3282]: I0904 17:32:34.930782 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:34.931748 containerd[1692]: time="2024-09-04T17:32:34.931669244Z" level=info msg="StopPodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\"" Sep 4 17:32:34.931943 containerd[1692]: time="2024-09-04T17:32:34.931892148Z" level=info msg="Ensure that sandbox 40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73 in task-service has been cleanup successfully" Sep 4 17:32:34.937018 kubelet[3282]: I0904 17:32:34.936882 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:34.942447 containerd[1692]: time="2024-09-04T17:32:34.942324925Z" level=info msg="StopPodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\"" Sep 4 17:32:34.943768 containerd[1692]: time="2024-09-04T17:32:34.943404143Z" level=info msg="Ensure that sandbox 08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef in task-service has been cleanup successfully" Sep 4 17:32:34.974365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73-shm.mount: Deactivated successfully. Sep 4 17:32:34.974491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef-shm.mount: Deactivated successfully. 
Sep 4 17:32:35.013718 containerd[1692]: time="2024-09-04T17:32:35.013481934Z" level=error msg="StopPodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" failed" error="failed to destroy network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:35.014221 kubelet[3282]: E0904 17:32:35.014032 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:35.014737 kubelet[3282]: E0904 17:32:35.014543 3282 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73"} Sep 4 17:32:35.014737 kubelet[3282]: E0904 17:32:35.014700 3282 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39aed6ce-ad34-481b-9f5d-53f97e2a213b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:35.015226 kubelet[3282]: E0904 17:32:35.015011 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"39aed6ce-ad34-481b-9f5d-53f97e2a213b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dshhv" podUID="39aed6ce-ad34-481b-9f5d-53f97e2a213b" Sep 4 17:32:35.015782 containerd[1692]: time="2024-09-04T17:32:35.015678671Z" level=error msg="StopPodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" failed" error="failed to destroy network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:35.017476 kubelet[3282]: E0904 17:32:35.017387 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:35.017476 kubelet[3282]: E0904 17:32:35.017427 3282 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef"} Sep 4 17:32:35.018343 kubelet[3282]: E0904 17:32:35.018245 3282 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87a16e16-c2e6-4905-8da8-5de334819488\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:35.018343 kubelet[3282]: E0904 17:32:35.018317 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87a16e16-c2e6-4905-8da8-5de334819488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" podUID="87a16e16-c2e6-4905-8da8-5de334819488" Sep 4 17:32:35.019588 containerd[1692]: time="2024-09-04T17:32:35.019547137Z" level=error msg="StopPodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" failed" error="failed to destroy network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:35.019786 kubelet[3282]: E0904 17:32:35.019766 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 
17:32:35.019875 kubelet[3282]: E0904 17:32:35.019805 3282 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e"} Sep 4 17:32:35.019875 kubelet[3282]: E0904 17:32:35.019853 3282 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de6c1c61-155f-48e7-a53f-5ba66c45e5f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:35.020034 kubelet[3282]: E0904 17:32:35.019892 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de6c1c61-155f-48e7-a53f-5ba66c45e5f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8jztd" podUID="de6c1c61-155f-48e7-a53f-5ba66c45e5f2" Sep 4 17:32:35.023620 containerd[1692]: time="2024-09-04T17:32:35.023578105Z" level=error msg="StopPodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" failed" error="failed to destroy network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:32:35.023793 kubelet[3282]: E0904 17:32:35.023771 3282 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:35.023871 kubelet[3282]: E0904 17:32:35.023815 3282 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef"} Sep 4 17:32:35.023871 kubelet[3282]: E0904 17:32:35.023860 3282 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"516fb432-35a0-42b1-a39f-352e51299738\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:32:35.023978 kubelet[3282]: E0904 17:32:35.023898 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"516fb432-35a0-42b1-a39f-352e51299738\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dw5hs" podUID="516fb432-35a0-42b1-a39f-352e51299738" Sep 4 17:32:39.448070 update_engine[1676]: I0904 17:32:39.448002 1676 libcurl_http_fetcher.cc:47] Starting/Resuming 
transfer Sep 4 17:32:39.448656 update_engine[1676]: I0904 17:32:39.448339 1676 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:32:39.448656 update_engine[1676]: I0904 17:32:39.448648 1676 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:32:39.454660 update_engine[1676]: E0904 17:32:39.454628 1676 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:32:39.454786 update_engine[1676]: I0904 17:32:39.454695 1676 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 17:32:39.454786 update_engine[1676]: I0904 17:32:39.454704 1676 omaha_request_action.cc:617] Omaha request response: Sep 4 17:32:39.454865 update_engine[1676]: E0904 17:32:39.454798 1676 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 17:32:39.454865 update_engine[1676]: I0904 17:32:39.454820 1676 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 4 17:32:39.454865 update_engine[1676]: I0904 17:32:39.454825 1676 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:32:39.454865 update_engine[1676]: I0904 17:32:39.454829 1676 update_attempter.cc:306] Processing Done. Sep 4 17:32:39.454865 update_engine[1676]: E0904 17:32:39.454846 1676 update_attempter.cc:619] Update failed. Sep 4 17:32:39.454865 update_engine[1676]: I0904 17:32:39.454851 1676 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 17:32:39.454865 update_engine[1676]: I0904 17:32:39.454856 1676 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 17:32:39.454865 update_engine[1676]: I0904 17:32:39.454862 1676 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 4 17:32:39.455136 update_engine[1676]: I0904 17:32:39.454948 1676 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 17:32:39.455136 update_engine[1676]: I0904 17:32:39.454970 1676 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 17:32:39.455136 update_engine[1676]: I0904 17:32:39.454974 1676 omaha_request_action.cc:272] Request: Sep 4 17:32:39.455136 update_engine[1676]: Sep 4 17:32:39.455136 update_engine[1676]: Sep 4 17:32:39.455136 update_engine[1676]: Sep 4 17:32:39.455136 update_engine[1676]: Sep 4 17:32:39.455136 update_engine[1676]: Sep 4 17:32:39.455136 update_engine[1676]: Sep 4 17:32:39.455136 update_engine[1676]: I0904 17:32:39.454979 1676 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:32:39.455136 update_engine[1676]: I0904 17:32:39.455108 1676 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:32:39.455512 update_engine[1676]: I0904 17:32:39.455298 1676 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 17:32:39.455713 locksmithd[1728]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 4 17:32:39.480530 update_engine[1676]: E0904 17:32:39.480497 1676 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480552 1676 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480561 1676 omaha_request_action.cc:617] Omaha request response: Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480566 1676 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480576 1676 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480581 1676 update_attempter.cc:306] Processing Done. Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480587 1676 update_attempter.cc:310] Error event sent. Sep 4 17:32:39.480632 update_engine[1676]: I0904 17:32:39.480597 1676 update_check_scheduler.cc:74] Next update check in 47m20s Sep 4 17:32:39.481037 locksmithd[1728]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 4 17:32:41.505462 kubelet[3282]: I0904 17:32:41.505055 3282 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:32:43.867009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619292844.mount: Deactivated successfully. 
Sep 4 17:32:44.169797 containerd[1692]: time="2024-09-04T17:32:44.169725597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:44.216010 containerd[1692]: time="2024-09-04T17:32:44.215902088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:32:44.263309 containerd[1692]: time="2024-09-04T17:32:44.263204698Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:44.321468 containerd[1692]: time="2024-09-04T17:32:44.321350694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:44.323042 containerd[1692]: time="2024-09-04T17:32:44.322397512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 10.40279376s" Sep 4 17:32:44.323042 containerd[1692]: time="2024-09-04T17:32:44.322453513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:32:44.341381 containerd[1692]: time="2024-09-04T17:32:44.341332536Z" level=info msg="CreateContainer within sandbox \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:32:44.672719 containerd[1692]: time="2024-09-04T17:32:44.672654910Z" level=info msg="CreateContainer 
within sandbox \"5dc15d56a5ce19d36a4c7c109d1f508fccf2b5ebdd11df56585bef89d7f12728\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8\"" Sep 4 17:32:44.674833 containerd[1692]: time="2024-09-04T17:32:44.673800330Z" level=info msg="StartContainer for \"dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8\"" Sep 4 17:32:44.706382 systemd[1]: Started cri-containerd-dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8.scope - libcontainer container dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8. Sep 4 17:32:44.738783 containerd[1692]: time="2024-09-04T17:32:44.738647040Z" level=info msg="StartContainer for \"dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8\" returns successfully" Sep 4 17:32:44.993306 kubelet[3282]: I0904 17:32:44.992757 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kxqzg" podStartSLOduration=1.667209615 podStartE2EDuration="26.992695891s" podCreationTimestamp="2024-09-04 17:32:18 +0000 UTC" firstStartedPulling="2024-09-04 17:32:18.997288042 +0000 UTC m=+21.797286560" lastFinishedPulling="2024-09-04 17:32:44.322774218 +0000 UTC m=+47.122772836" observedRunningTime="2024-09-04 17:32:44.990466653 +0000 UTC m=+47.790465171" watchObservedRunningTime="2024-09-04 17:32:44.992695891 +0000 UTC m=+47.792694509" Sep 4 17:32:45.010085 systemd[1]: run-containerd-runc-k8s.io-dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8-runc.sj76Z5.mount: Deactivated successfully. Sep 4 17:32:45.086423 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:32:45.087373 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 17:32:45.779565 containerd[1692]: time="2024-09-04T17:32:45.778525005Z" level=info msg="StopPodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\"" Sep 4 17:32:45.779565 containerd[1692]: time="2024-09-04T17:32:45.779258815Z" level=info msg="StopPodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\"" Sep 4 17:32:45.782185 containerd[1692]: time="2024-09-04T17:32:45.781037839Z" level=info msg="StopPodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\"" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.867 [INFO][4379] k8s.go 608: Cleaning up netns ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.868 [INFO][4379] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" iface="eth0" netns="/var/run/netns/cni-a1b67b54-d925-d704-56a7-11dd172bd811" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.868 [INFO][4379] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" iface="eth0" netns="/var/run/netns/cni-a1b67b54-d925-d704-56a7-11dd172bd811" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.869 [INFO][4379] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" iface="eth0" netns="/var/run/netns/cni-a1b67b54-d925-d704-56a7-11dd172bd811" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.871 [INFO][4379] k8s.go 615: Releasing IP address(es) ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.871 [INFO][4379] utils.go 188: Calico CNI releasing IP address ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.911 [INFO][4394] ipam_plugin.go 417: Releasing address using handleID ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.911 [INFO][4394] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.911 [INFO][4394] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.919 [WARNING][4394] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.919 [INFO][4394] ipam_plugin.go 445: Releasing address using workloadID ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.924 [INFO][4394] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:45.934267 containerd[1692]: 2024-09-04 17:32:45.931 [INFO][4379] k8s.go 621: Teardown processing complete. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:45.938697 containerd[1692]: time="2024-09-04T17:32:45.934257923Z" level=info msg="TearDown network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" successfully" Sep 4 17:32:45.938697 containerd[1692]: time="2024-09-04T17:32:45.934298524Z" level=info msg="StopPodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" returns successfully" Sep 4 17:32:45.940223 containerd[1692]: time="2024-09-04T17:32:45.938944487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9db4776-46qgr,Uid:87a16e16-c2e6-4905-8da8-5de334819488,Namespace:calico-system,Attempt:1,}" Sep 4 17:32:45.940610 systemd[1]: run-netns-cni\x2da1b67b54\x2dd925\x2dd704\x2d56a7\x2d11dd172bd811.mount: Deactivated successfully. 
Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.864 [INFO][4368] k8s.go 608: Cleaning up netns ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.864 [INFO][4368] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" iface="eth0" netns="/var/run/netns/cni-08581cf3-9d7d-af25-d624-9143174e1e56" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.864 [INFO][4368] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" iface="eth0" netns="/var/run/netns/cni-08581cf3-9d7d-af25-d624-9143174e1e56" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.864 [INFO][4368] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" iface="eth0" netns="/var/run/netns/cni-08581cf3-9d7d-af25-d624-9143174e1e56" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.864 [INFO][4368] k8s.go 615: Releasing IP address(es) ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.864 [INFO][4368] utils.go 188: Calico CNI releasing IP address ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.921 [INFO][4390] ipam_plugin.go 417: Releasing address using handleID ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.922 [INFO][4390] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.925 [INFO][4390] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.940 [WARNING][4390] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.940 [INFO][4390] ipam_plugin.go 445: Releasing address using workloadID ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.942 [INFO][4390] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:45.946087 containerd[1692]: 2024-09-04 17:32:45.944 [INFO][4368] k8s.go 621: Teardown processing complete. 
ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:45.949404 containerd[1692]: time="2024-09-04T17:32:45.948812821Z" level=info msg="TearDown network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" successfully" Sep 4 17:32:45.949404 containerd[1692]: time="2024-09-04T17:32:45.948863522Z" level=info msg="StopPodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" returns successfully" Sep 4 17:32:45.949823 containerd[1692]: time="2024-09-04T17:32:45.949791934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8jztd,Uid:de6c1c61-155f-48e7-a53f-5ba66c45e5f2,Namespace:kube-system,Attempt:1,}" Sep 4 17:32:45.951859 systemd[1]: run-netns-cni\x2d08581cf3\x2d9d7d\x2daf25\x2dd624\x2d9143174e1e56.mount: Deactivated successfully. Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.881 [INFO][4361] k8s.go 608: Cleaning up netns ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.881 [INFO][4361] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" iface="eth0" netns="/var/run/netns/cni-87aceba2-d216-49c5-fed1-7f4210183ccb" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.881 [INFO][4361] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" iface="eth0" netns="/var/run/netns/cni-87aceba2-d216-49c5-fed1-7f4210183ccb" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.881 [INFO][4361] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" iface="eth0" netns="/var/run/netns/cni-87aceba2-d216-49c5-fed1-7f4210183ccb" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.881 [INFO][4361] k8s.go 615: Releasing IP address(es) ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.881 [INFO][4361] utils.go 188: Calico CNI releasing IP address ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.932 [INFO][4398] ipam_plugin.go 417: Releasing address using handleID ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.932 [INFO][4398] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.942 [INFO][4398] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.960 [WARNING][4398] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.960 [INFO][4398] ipam_plugin.go 445: Releasing address using workloadID ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.961 [INFO][4398] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:45.963804 containerd[1692]: 2024-09-04 17:32:45.962 [INFO][4361] k8s.go 621: Teardown processing complete. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:45.965851 containerd[1692]: time="2024-09-04T17:32:45.963930527Z" level=info msg="TearDown network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" successfully" Sep 4 17:32:45.965851 containerd[1692]: time="2024-09-04T17:32:45.963958527Z" level=info msg="StopPodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" returns successfully" Sep 4 17:32:45.966860 containerd[1692]: time="2024-09-04T17:32:45.966830066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dw5hs,Uid:516fb432-35a0-42b1-a39f-352e51299738,Namespace:calico-system,Attempt:1,}" Sep 4 17:32:45.967459 systemd[1]: run-netns-cni\x2d87aceba2\x2dd216\x2d49c5\x2dfed1\x2d7f4210183ccb.mount: Deactivated successfully. 
Sep 4 17:32:46.757243 kernel: bpftool[4546]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:32:46.776565 containerd[1692]: time="2024-09-04T17:32:46.776512580Z" level=info msg="StopPodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\"" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.836 [INFO][4560] k8s.go 608: Cleaning up netns ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.836 [INFO][4560] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" iface="eth0" netns="/var/run/netns/cni-15a50d15-e7fa-e00d-8f08-530815201f14" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.836 [INFO][4560] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" iface="eth0" netns="/var/run/netns/cni-15a50d15-e7fa-e00d-8f08-530815201f14" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.837 [INFO][4560] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" iface="eth0" netns="/var/run/netns/cni-15a50d15-e7fa-e00d-8f08-530815201f14" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.837 [INFO][4560] k8s.go 615: Releasing IP address(es) ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.837 [INFO][4560] utils.go 188: Calico CNI releasing IP address ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.867 [INFO][4567] ipam_plugin.go 417: Releasing address using handleID ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.871 [INFO][4567] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.871 [INFO][4567] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.878 [WARNING][4567] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.878 [INFO][4567] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.879 [INFO][4567] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:46.882675 containerd[1692]: 2024-09-04 17:32:46.881 [INFO][4560] k8s.go 621: Teardown processing complete. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:46.885943 containerd[1692]: time="2024-09-04T17:32:46.882920728Z" level=info msg="TearDown network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" successfully" Sep 4 17:32:46.885943 containerd[1692]: time="2024-09-04T17:32:46.882955328Z" level=info msg="StopPodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" returns successfully" Sep 4 17:32:46.885943 containerd[1692]: time="2024-09-04T17:32:46.885410762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dshhv,Uid:39aed6ce-ad34-481b-9f5d-53f97e2a213b,Namespace:kube-system,Attempt:1,}" Sep 4 17:32:46.887826 systemd[1]: run-netns-cni\x2d15a50d15\x2de7fa\x2de00d\x2d8f08\x2d530815201f14.mount: Deactivated successfully. 
Sep 4 17:32:47.217229 systemd-networkd[1340]: vxlan.calico: Link UP Sep 4 17:32:47.217253 systemd-networkd[1340]: vxlan.calico: Gained carrier Sep 4 17:32:47.485263 systemd-networkd[1340]: calid1ec02d6813: Link UP Sep 4 17:32:47.485682 systemd-networkd[1340]: calid1ec02d6813: Gained carrier Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.416 [INFO][4614] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0 coredns-76f75df574- kube-system de6c1c61-155f-48e7-a53f-5ba66c45e5f2 691 0 2024-09-04 17:32:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-27f7f2cbdf coredns-76f75df574-8jztd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid1ec02d6813 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.416 [INFO][4614] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.444 [INFO][4634] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" HandleID="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.505204 containerd[1692]: 
2024-09-04 17:32:47.451 [INFO][4634] ipam_plugin.go 270: Auto assigning IP ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" HandleID="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a330), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-27f7f2cbdf", "pod":"coredns-76f75df574-8jztd", "timestamp":"2024-09-04 17:32:47.444229363 +0000 UTC"}, Hostname:"ci-3975.2.1-a-27f7f2cbdf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.451 [INFO][4634] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.452 [INFO][4634] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.452 [INFO][4634] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-27f7f2cbdf' Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.453 [INFO][4634] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.457 [INFO][4634] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.462 [INFO][4634] ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.463 [INFO][4634] ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.466 [INFO][4634] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.466 [INFO][4634] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.468 [INFO][4634] ipam.go 1685: Creating new handle: k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642 Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.471 [INFO][4634] ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.475 [INFO][4634] ipam.go 1216: Successfully claimed IPs: [192.168.25.65/26] block=192.168.25.64/26 
handle="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.475 [INFO][4634] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.65/26] handle="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.475 [INFO][4634] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:47.505204 containerd[1692]: 2024-09-04 17:32:47.475 [INFO][4634] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.25.65/26] IPv6=[] ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" HandleID="k8s-pod-network.f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.508482 containerd[1692]: 2024-09-04 17:32:47.477 [INFO][4614] k8s.go 386: Populated endpoint ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"de6c1c61-155f-48e7-a53f-5ba66c45e5f2", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"", Pod:"coredns-76f75df574-8jztd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1ec02d6813", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:47.508482 containerd[1692]: 2024-09-04 17:32:47.477 [INFO][4614] k8s.go 387: Calico CNI using IPs: [192.168.25.65/32] ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.508482 containerd[1692]: 2024-09-04 17:32:47.478 [INFO][4614] dataplane_linux.go 68: Setting the host side veth name to calid1ec02d6813 ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.508482 containerd[1692]: 2024-09-04 17:32:47.486 [INFO][4614] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" 
Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.508482 containerd[1692]: 2024-09-04 17:32:47.488 [INFO][4614] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"de6c1c61-155f-48e7-a53f-5ba66c45e5f2", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642", Pod:"coredns-76f75df574-8jztd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1ec02d6813", MAC:"8e:8e:e8:8d:70:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:47.508482 containerd[1692]: 2024-09-04 17:32:47.501 [INFO][4614] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642" Namespace="kube-system" Pod="coredns-76f75df574-8jztd" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:47.715788 systemd-networkd[1340]: cali3b655f54aa5: Link UP Sep 4 17:32:47.717368 systemd-networkd[1340]: cali3b655f54aa5: Gained carrier Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.556 [INFO][4652] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0 calico-kube-controllers-7c9db4776- calico-system 87a16e16-c2e6-4905-8da8-5de334819488 692 0 2024-09-04 17:32:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c9db4776 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.1-a-27f7f2cbdf calico-kube-controllers-7c9db4776-46qgr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3b655f54aa5 [] []}} ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.556 [INFO][4652] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.631 [INFO][4667] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" HandleID="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.644 [INFO][4667] ipam_plugin.go 270: Auto assigning IP ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" HandleID="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003181b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-27f7f2cbdf", "pod":"calico-kube-controllers-7c9db4776-46qgr", "timestamp":"2024-09-04 17:32:47.63148871 +0000 UTC"}, Hostname:"ci-3975.2.1-a-27f7f2cbdf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.644 [INFO][4667] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.644 [INFO][4667] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.644 [INFO][4667] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-27f7f2cbdf' Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.647 [INFO][4667] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.653 [INFO][4667] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.666 [INFO][4667] ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.686 [INFO][4667] ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.689 [INFO][4667] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.689 [INFO][4667] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.693 [INFO][4667] ipam.go 1685: Creating new handle: k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1 Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.699 [INFO][4667] ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.705 [INFO][4667] ipam.go 1216: Successfully claimed IPs: [192.168.25.66/26] block=192.168.25.64/26 
handle="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.705 [INFO][4667] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.66/26] handle="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.705 [INFO][4667] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:47.740984 containerd[1692]: 2024-09-04 17:32:47.705 [INFO][4667] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.25.66/26] IPv6=[] ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" HandleID="k8s-pod-network.5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:47.742073 containerd[1692]: 2024-09-04 17:32:47.708 [INFO][4652] k8s.go 386: Populated endpoint ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0", GenerateName:"calico-kube-controllers-7c9db4776-", Namespace:"calico-system", SelfLink:"", UID:"87a16e16-c2e6-4905-8da8-5de334819488", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9db4776", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"", Pod:"calico-kube-controllers-7c9db4776-46qgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b655f54aa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:47.742073 containerd[1692]: 2024-09-04 17:32:47.708 [INFO][4652] k8s.go 387: Calico CNI using IPs: [192.168.25.66/32] ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:47.742073 containerd[1692]: 2024-09-04 17:32:47.708 [INFO][4652] dataplane_linux.go 68: Setting the host side veth name to cali3b655f54aa5 ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:47.742073 containerd[1692]: 2024-09-04 17:32:47.711 [INFO][4652] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 
17:32:47.742073 containerd[1692]: 2024-09-04 17:32:47.711 [INFO][4652] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0", GenerateName:"calico-kube-controllers-7c9db4776-", Namespace:"calico-system", SelfLink:"", UID:"87a16e16-c2e6-4905-8da8-5de334819488", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9db4776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1", Pod:"calico-kube-controllers-7c9db4776-46qgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b655f54aa5", MAC:"72:c1:99:dd:dc:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 
17:32:47.742073 containerd[1692]: 2024-09-04 17:32:47.736 [INFO][4652] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1" Namespace="calico-system" Pod="calico-kube-controllers-7c9db4776-46qgr" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:47.767505 containerd[1692]: time="2024-09-04T17:32:47.766154342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:47.767505 containerd[1692]: time="2024-09-04T17:32:47.767212357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:47.767505 containerd[1692]: time="2024-09-04T17:32:47.767281958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:47.767505 containerd[1692]: time="2024-09-04T17:32:47.767335158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:47.788602 systemd-networkd[1340]: calid4e97d99ddd: Link UP Sep 4 17:32:47.792493 systemd-networkd[1340]: calid4e97d99ddd: Gained carrier Sep 4 17:32:47.827385 systemd[1]: Started cri-containerd-f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642.scope - libcontainer container f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642. 
Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.622 [INFO][4668] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0 csi-node-driver- calico-system 516fb432-35a0-42b1-a39f-352e51299738 693 0 2024-09-04 17:32:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.1-a-27f7f2cbdf csi-node-driver-dw5hs eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid4e97d99ddd [] []}} ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.622 [INFO][4668] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.689 [INFO][4692] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" HandleID="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.706 [INFO][4692] ipam_plugin.go 270: Auto assigning IP ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" HandleID="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" 
Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-27f7f2cbdf", "pod":"csi-node-driver-dw5hs", "timestamp":"2024-09-04 17:32:47.689583101 +0000 UTC"}, Hostname:"ci-3975.2.1-a-27f7f2cbdf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.706 [INFO][4692] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.706 [INFO][4692] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.706 [INFO][4692] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-27f7f2cbdf' Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.708 [INFO][4692] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.726 [INFO][4692] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.741 [INFO][4692] ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.746 [INFO][4692] ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.752 [INFO][4692] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.752 [INFO][4692] 
ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.755 [INFO][4692] ipam.go 1685: Creating new handle: k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2 Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.761 [INFO][4692] ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.769 [INFO][4692] ipam.go 1216: Successfully claimed IPs: [192.168.25.67/26] block=192.168.25.64/26 handle="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.770 [INFO][4692] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.67/26] handle="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.770 [INFO][4692] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:32:47.832936 containerd[1692]: 2024-09-04 17:32:47.771 [INFO][4692] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.25.67/26] IPv6=[] ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" HandleID="k8s-pod-network.0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.834069 containerd[1692]: 2024-09-04 17:32:47.778 [INFO][4668] k8s.go 386: Populated endpoint ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"516fb432-35a0-42b1-a39f-352e51299738", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"", Pod:"csi-node-driver-dw5hs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4e97d99ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:47.834069 containerd[1692]: 2024-09-04 17:32:47.779 [INFO][4668] k8s.go 387: Calico CNI using IPs: [192.168.25.67/32] ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.834069 containerd[1692]: 2024-09-04 17:32:47.779 [INFO][4668] dataplane_linux.go 68: Setting the host side veth name to calid4e97d99ddd ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.834069 containerd[1692]: 2024-09-04 17:32:47.794 [INFO][4668] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.834069 containerd[1692]: 2024-09-04 17:32:47.797 [INFO][4668] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"516fb432-35a0-42b1-a39f-352e51299738", ResourceVersion:"693", Generation:0, 
CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2", Pod:"csi-node-driver-dw5hs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4e97d99ddd", MAC:"e6:54:bf:c2:6a:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:47.834069 containerd[1692]: 2024-09-04 17:32:47.825 [INFO][4668] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2" Namespace="calico-system" Pod="csi-node-driver-dw5hs" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:47.932111 containerd[1692]: time="2024-09-04T17:32:47.930711481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:47.932111 containerd[1692]: time="2024-09-04T17:32:47.930788382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:47.932111 containerd[1692]: time="2024-09-04T17:32:47.930823482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:47.932111 containerd[1692]: time="2024-09-04T17:32:47.930852783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:47.978968 systemd[1]: Started cri-containerd-5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1.scope - libcontainer container 5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1. Sep 4 17:32:47.993255 containerd[1692]: time="2024-09-04T17:32:47.991015201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:47.994038 containerd[1692]: time="2024-09-04T17:32:47.993574536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:47.995294 containerd[1692]: time="2024-09-04T17:32:47.994979055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:47.996329 containerd[1692]: time="2024-09-04T17:32:47.995265759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:48.012343 containerd[1692]: time="2024-09-04T17:32:48.012197989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8jztd,Uid:de6c1c61-155f-48e7-a53f-5ba66c45e5f2,Namespace:kube-system,Attempt:1,} returns sandbox id \"f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642\"" Sep 4 17:32:48.021279 containerd[1692]: time="2024-09-04T17:32:48.021240212Z" level=info msg="CreateContainer within sandbox \"f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:32:48.047470 systemd[1]: Started cri-containerd-0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2.scope - libcontainer container 0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2. Sep 4 17:32:48.092633 containerd[1692]: time="2024-09-04T17:32:48.092588283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9db4776-46qgr,Uid:87a16e16-c2e6-4905-8da8-5de334819488,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1\"" Sep 4 17:32:48.100773 containerd[1692]: time="2024-09-04T17:32:48.100279487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:32:48.114898 containerd[1692]: time="2024-09-04T17:32:48.114849585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dw5hs,Uid:516fb432-35a0-42b1-a39f-352e51299738,Namespace:calico-system,Attempt:1,} returns sandbox id \"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2\"" Sep 4 17:32:48.135430 systemd-networkd[1340]: cali70c26411fbb: Link UP Sep 4 17:32:48.137070 systemd-networkd[1340]: cali70c26411fbb: Gained carrier Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:47.926 [INFO][4751] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0 coredns-76f75df574- kube-system 39aed6ce-ad34-481b-9f5d-53f97e2a213b 697 0 2024-09-04 17:32:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-27f7f2cbdf coredns-76f75df574-dshhv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70c26411fbb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:47.926 [INFO][4751] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.056 [INFO][4811] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" HandleID="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.079 [INFO][4811] ipam_plugin.go 270: Auto assigning IP ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" HandleID="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004fe580), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-3975.2.1-a-27f7f2cbdf", "pod":"coredns-76f75df574-dshhv", "timestamp":"2024-09-04 17:32:48.056558393 +0000 UTC"}, Hostname:"ci-3975.2.1-a-27f7f2cbdf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.079 [INFO][4811] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.079 [INFO][4811] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.079 [INFO][4811] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-27f7f2cbdf' Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.082 [INFO][4811] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.088 [INFO][4811] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.095 [INFO][4811] ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.098 [INFO][4811] ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.106 [INFO][4811] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.106 [INFO][4811] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" host="ci-3975.2.1-a-27f7f2cbdf" 
Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.109 [INFO][4811] ipam.go 1685: Creating new handle: k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.118 [INFO][4811] ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.127 [INFO][4811] ipam.go 1216: Successfully claimed IPs: [192.168.25.68/26] block=192.168.25.64/26 handle="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.128 [INFO][4811] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.68/26] handle="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.128 [INFO][4811] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:32:48.150403 containerd[1692]: 2024-09-04 17:32:48.128 [INFO][4811] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.25.68/26] IPv6=[] ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" HandleID="k8s-pod-network.02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.151349 containerd[1692]: 2024-09-04 17:32:48.131 [INFO][4751] k8s.go 386: Populated endpoint ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39aed6ce-ad34-481b-9f5d-53f97e2a213b", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"", Pod:"coredns-76f75df574-dshhv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70c26411fbb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:48.151349 containerd[1692]: 2024-09-04 17:32:48.131 [INFO][4751] k8s.go 387: Calico CNI using IPs: [192.168.25.68/32] ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.151349 containerd[1692]: 2024-09-04 17:32:48.131 [INFO][4751] dataplane_linux.go 68: Setting the host side veth name to cali70c26411fbb ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.151349 containerd[1692]: 2024-09-04 17:32:48.134 [INFO][4751] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.151349 containerd[1692]: 2024-09-04 17:32:48.135 [INFO][4751] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39aed6ce-ad34-481b-9f5d-53f97e2a213b", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e", Pod:"coredns-76f75df574-dshhv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70c26411fbb", MAC:"be:74:1b:1c:13:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:48.151349 containerd[1692]: 2024-09-04 17:32:48.147 [INFO][4751] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e" Namespace="kube-system" Pod="coredns-76f75df574-dshhv" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:48.240284 containerd[1692]: time="2024-09-04T17:32:48.239967287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:48.240284 containerd[1692]: time="2024-09-04T17:32:48.240056389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:48.240284 containerd[1692]: time="2024-09-04T17:32:48.240083689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:48.240284 containerd[1692]: time="2024-09-04T17:32:48.240113589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:48.295302 systemd[1]: Started cri-containerd-02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e.scope - libcontainer container 02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e. 
Sep 4 17:32:48.335976 containerd[1692]: time="2024-09-04T17:32:48.335830491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dshhv,Uid:39aed6ce-ad34-481b-9f5d-53f97e2a213b,Namespace:kube-system,Attempt:1,} returns sandbox id \"02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e\"" Sep 4 17:32:48.340690 containerd[1692]: time="2024-09-04T17:32:48.340652457Z" level=info msg="CreateContainer within sandbox \"02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:32:48.521207 containerd[1692]: time="2024-09-04T17:32:48.521110412Z" level=info msg="CreateContainer within sandbox \"f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4be3a4c3db38efa53779bf7a7c9826c92e3a66dd5b51eff6e177595433b38798\"" Sep 4 17:32:48.521984 containerd[1692]: time="2024-09-04T17:32:48.521863522Z" level=info msg="StartContainer for \"4be3a4c3db38efa53779bf7a7c9826c92e3a66dd5b51eff6e177595433b38798\"" Sep 4 17:32:48.588332 systemd[1]: Started cri-containerd-4be3a4c3db38efa53779bf7a7c9826c92e3a66dd5b51eff6e177595433b38798.scope - libcontainer container 4be3a4c3db38efa53779bf7a7c9826c92e3a66dd5b51eff6e177595433b38798. 
Sep 4 17:32:48.726346 containerd[1692]: time="2024-09-04T17:32:48.726244802Z" level=info msg="StartContainer for \"4be3a4c3db38efa53779bf7a7c9826c92e3a66dd5b51eff6e177595433b38798\" returns successfully" Sep 4 17:32:48.758323 systemd-networkd[1340]: vxlan.calico: Gained IPv6LL Sep 4 17:32:48.823441 systemd-networkd[1340]: calid1ec02d6813: Gained IPv6LL Sep 4 17:32:48.921355 containerd[1692]: time="2024-09-04T17:32:48.921102353Z" level=info msg="CreateContainer within sandbox \"02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37af9be0c60339e4795ac2834a71e0f01604b83fda95497bc3977f34845388fa\"" Sep 4 17:32:48.922202 containerd[1692]: time="2024-09-04T17:32:48.922151767Z" level=info msg="StartContainer for \"37af9be0c60339e4795ac2834a71e0f01604b83fda95497bc3977f34845388fa\"" Sep 4 17:32:48.982356 systemd[1]: Started cri-containerd-37af9be0c60339e4795ac2834a71e0f01604b83fda95497bc3977f34845388fa.scope - libcontainer container 37af9be0c60339e4795ac2834a71e0f01604b83fda95497bc3977f34845388fa. 
Sep 4 17:32:49.014105 kubelet[3282]: I0904 17:32:49.014060 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8jztd" podStartSLOduration=38.013993616 podStartE2EDuration="38.013993616s" podCreationTimestamp="2024-09-04 17:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:49.01208189 +0000 UTC m=+51.812080508" watchObservedRunningTime="2024-09-04 17:32:49.013993616 +0000 UTC m=+51.813992634" Sep 4 17:32:49.032596 containerd[1692]: time="2024-09-04T17:32:49.030883246Z" level=info msg="StartContainer for \"37af9be0c60339e4795ac2834a71e0f01604b83fda95497bc3977f34845388fa\" returns successfully" Sep 4 17:32:49.462361 systemd-networkd[1340]: calid4e97d99ddd: Gained IPv6LL Sep 4 17:32:49.526779 systemd-networkd[1340]: cali3b655f54aa5: Gained IPv6LL Sep 4 17:32:50.028753 kubelet[3282]: I0904 17:32:50.028701 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dshhv" podStartSLOduration=39.028645036 podStartE2EDuration="39.028645036s" podCreationTimestamp="2024-09-04 17:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:50.012758571 +0000 UTC m=+52.812757089" watchObservedRunningTime="2024-09-04 17:32:50.028645036 +0000 UTC m=+52.828643554" Sep 4 17:32:50.102370 systemd-networkd[1340]: cali70c26411fbb: Gained IPv6LL Sep 4 17:32:51.833073 containerd[1692]: time="2024-09-04T17:32:51.833002769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:51.837537 containerd[1692]: time="2024-09-04T17:32:51.837474343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 
17:32:51.841232 containerd[1692]: time="2024-09-04T17:32:51.840822299Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:51.845471 containerd[1692]: time="2024-09-04T17:32:51.845430575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:51.847390 containerd[1692]: time="2024-09-04T17:32:51.846801598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.74647321s" Sep 4 17:32:51.847390 containerd[1692]: time="2024-09-04T17:32:51.846845899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:32:51.849241 containerd[1692]: time="2024-09-04T17:32:51.848752131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:32:51.874091 containerd[1692]: time="2024-09-04T17:32:51.874040952Z" level=info msg="CreateContainer within sandbox \"5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:32:51.918184 containerd[1692]: time="2024-09-04T17:32:51.917990783Z" level=info msg="CreateContainer within sandbox \"5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb\"" Sep 4 17:32:51.919309 containerd[1692]: time="2024-09-04T17:32:51.919272605Z" level=info msg="StartContainer for \"f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb\"" Sep 4 17:32:51.965327 systemd[1]: Started cri-containerd-f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb.scope - libcontainer container f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb. Sep 4 17:32:52.021331 containerd[1692]: time="2024-09-04T17:32:52.021282902Z" level=info msg="StartContainer for \"f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb\" returns successfully" Sep 4 17:32:53.041192 kubelet[3282]: I0904 17:32:53.040590 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c9db4776-46qgr" podStartSLOduration=31.292073527 podStartE2EDuration="35.040522967s" podCreationTimestamp="2024-09-04 17:32:18 +0000 UTC" firstStartedPulling="2024-09-04 17:32:48.099317974 +0000 UTC m=+50.899316492" lastFinishedPulling="2024-09-04 17:32:51.847767314 +0000 UTC m=+54.647765932" observedRunningTime="2024-09-04 17:32:53.039582652 +0000 UTC m=+55.839581270" watchObservedRunningTime="2024-09-04 17:32:53.040522967 +0000 UTC m=+55.840521485" Sep 4 17:32:53.285498 containerd[1692]: time="2024-09-04T17:32:53.284416027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:53.286905 containerd[1692]: time="2024-09-04T17:32:53.286848367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:32:53.290319 containerd[1692]: time="2024-09-04T17:32:53.290249924Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:53.294684 
containerd[1692]: time="2024-09-04T17:32:53.294551295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:53.296045 containerd[1692]: time="2024-09-04T17:32:53.295312208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.446518977s" Sep 4 17:32:53.296045 containerd[1692]: time="2024-09-04T17:32:53.295350709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:32:53.297891 containerd[1692]: time="2024-09-04T17:32:53.297753349Z" level=info msg="CreateContainer within sandbox \"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:32:53.335964 containerd[1692]: time="2024-09-04T17:32:53.335800982Z" level=info msg="CreateContainer within sandbox \"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"39627774d281cb7b3d193cef1fb050f650616de68e08e8be4ba7a521e3b4995a\"" Sep 4 17:32:53.339429 containerd[1692]: time="2024-09-04T17:32:53.337303007Z" level=info msg="StartContainer for \"39627774d281cb7b3d193cef1fb050f650616de68e08e8be4ba7a521e3b4995a\"" Sep 4 17:32:53.390354 systemd[1]: Started cri-containerd-39627774d281cb7b3d193cef1fb050f650616de68e08e8be4ba7a521e3b4995a.scope - libcontainer container 39627774d281cb7b3d193cef1fb050f650616de68e08e8be4ba7a521e3b4995a. 
Sep 4 17:32:53.450110 containerd[1692]: time="2024-09-04T17:32:53.450058384Z" level=info msg="StartContainer for \"39627774d281cb7b3d193cef1fb050f650616de68e08e8be4ba7a521e3b4995a\" returns successfully" Sep 4 17:32:53.452737 containerd[1692]: time="2024-09-04T17:32:53.452700328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:32:55.165521 containerd[1692]: time="2024-09-04T17:32:55.165446123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:55.167820 containerd[1692]: time="2024-09-04T17:32:55.167740960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:32:55.171323 containerd[1692]: time="2024-09-04T17:32:55.171258017Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:55.177357 containerd[1692]: time="2024-09-04T17:32:55.177255615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:55.178338 containerd[1692]: time="2024-09-04T17:32:55.178293432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.725547703s" Sep 4 17:32:55.178444 containerd[1692]: time="2024-09-04T17:32:55.178340732Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:32:55.181021 containerd[1692]: time="2024-09-04T17:32:55.180987175Z" level=info msg="CreateContainer within sandbox \"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:32:55.225213 containerd[1692]: time="2024-09-04T17:32:55.225116494Z" level=info msg="CreateContainer within sandbox \"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47f2041ce44ac447aefd0d056b9a4587a3d0eeaae24b24c6c81533e956968113\"" Sep 4 17:32:55.225777 containerd[1692]: time="2024-09-04T17:32:55.225743304Z" level=info msg="StartContainer for \"47f2041ce44ac447aefd0d056b9a4587a3d0eeaae24b24c6c81533e956968113\"" Sep 4 17:32:55.269335 systemd[1]: Started cri-containerd-47f2041ce44ac447aefd0d056b9a4587a3d0eeaae24b24c6c81533e956968113.scope - libcontainer container 47f2041ce44ac447aefd0d056b9a4587a3d0eeaae24b24c6c81533e956968113. 
Sep 4 17:32:55.309253 containerd[1692]: time="2024-09-04T17:32:55.309127861Z" level=info msg="StartContainer for \"47f2041ce44ac447aefd0d056b9a4587a3d0eeaae24b24c6c81533e956968113\" returns successfully" Sep 4 17:32:55.880594 kubelet[3282]: I0904 17:32:55.880551 3282 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:32:55.880594 kubelet[3282]: I0904 17:32:55.880592 3282 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:32:56.046233 kubelet[3282]: I0904 17:32:56.045923 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-dw5hs" podStartSLOduration=30.985228041 podStartE2EDuration="38.045458345s" podCreationTimestamp="2024-09-04 17:32:18 +0000 UTC" firstStartedPulling="2024-09-04 17:32:48.118719738 +0000 UTC m=+50.918718356" lastFinishedPulling="2024-09-04 17:32:55.178950142 +0000 UTC m=+57.978948660" observedRunningTime="2024-09-04 17:32:56.045255042 +0000 UTC m=+58.845253660" watchObservedRunningTime="2024-09-04 17:32:56.045458345 +0000 UTC m=+58.845456963" Sep 4 17:32:57.798453 containerd[1692]: time="2024-09-04T17:32:57.798349274Z" level=info msg="StopPodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\"" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.898 [WARNING][5205] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"de6c1c61-155f-48e7-a53f-5ba66c45e5f2", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642", Pod:"coredns-76f75df574-8jztd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1ec02d6813", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.899 [INFO][5205] k8s.go 
608: Cleaning up netns ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.900 [INFO][5205] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" iface="eth0" netns="" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.900 [INFO][5205] k8s.go 615: Releasing IP address(es) ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.900 [INFO][5205] utils.go 188: Calico CNI releasing IP address ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.932 [INFO][5211] ipam_plugin.go 417: Releasing address using handleID ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.933 [INFO][5211] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.933 [INFO][5211] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.938 [WARNING][5211] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.938 [INFO][5211] ipam_plugin.go 445: Releasing address using workloadID ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.941 [INFO][5211] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:57.944090 containerd[1692]: 2024-09-04 17:32:57.942 [INFO][5205] k8s.go 621: Teardown processing complete. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:57.945242 containerd[1692]: time="2024-09-04T17:32:57.944204948Z" level=info msg="TearDown network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" successfully" Sep 4 17:32:57.945242 containerd[1692]: time="2024-09-04T17:32:57.944262149Z" level=info msg="StopPodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" returns successfully" Sep 4 17:32:57.945242 containerd[1692]: time="2024-09-04T17:32:57.944994961Z" level=info msg="RemovePodSandbox for \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\"" Sep 4 17:32:57.945242 containerd[1692]: time="2024-09-04T17:32:57.945036061Z" level=info msg="Forcibly stopping sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\"" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:57.999 [WARNING][5229] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"de6c1c61-155f-48e7-a53f-5ba66c45e5f2", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"f012e9e381dbc8a6e62d41f3c5478d0871c8e43bd78632990c575d24a5071642", Pod:"coredns-76f75df574-8jztd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid1ec02d6813", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.000 [INFO][5229] k8s.go 
608: Cleaning up netns ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.000 [INFO][5229] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" iface="eth0" netns="" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.000 [INFO][5229] k8s.go 615: Releasing IP address(es) ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.000 [INFO][5229] utils.go 188: Calico CNI releasing IP address ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.030 [INFO][5235] ipam_plugin.go 417: Releasing address using handleID ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.030 [INFO][5235] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.030 [INFO][5235] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.037 [WARNING][5235] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.038 [INFO][5235] ipam_plugin.go 445: Releasing address using workloadID ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" HandleID="k8s-pod-network.761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--8jztd-eth0" Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.043 [INFO][5235] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.048674 containerd[1692]: 2024-09-04 17:32:58.046 [INFO][5229] k8s.go 621: Teardown processing complete. ContainerID="761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e" Sep 4 17:32:58.051593 containerd[1692]: time="2024-09-04T17:32:58.049229357Z" level=info msg="TearDown network for sandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" successfully" Sep 4 17:32:58.063558 containerd[1692]: time="2024-09-04T17:32:58.063320287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:32:58.063558 containerd[1692]: time="2024-09-04T17:32:58.063456789Z" level=info msg="RemovePodSandbox \"761076956381be401d26636c3ef48091c8861f642a31e439b7d2dade254d132e\" returns successfully" Sep 4 17:32:58.065416 containerd[1692]: time="2024-09-04T17:32:58.065380620Z" level=info msg="StopPodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\"" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.146 [WARNING][5254] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39aed6ce-ad34-481b-9f5d-53f97e2a213b", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e", Pod:"coredns-76f75df574-dshhv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70c26411fbb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.146 [INFO][5254] k8s.go 608: Cleaning up netns ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.146 [INFO][5254] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" iface="eth0" netns="" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.146 [INFO][5254] k8s.go 615: Releasing IP address(es) ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.146 [INFO][5254] utils.go 188: Calico CNI releasing IP address ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.223 [INFO][5260] ipam_plugin.go 417: Releasing address using handleID ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.224 [INFO][5260] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.224 [INFO][5260] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.242 [WARNING][5260] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.242 [INFO][5260] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.249 [INFO][5260] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.254845 containerd[1692]: 2024-09-04 17:32:58.252 [INFO][5254] k8s.go 621: Teardown processing complete. 
ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.255583 containerd[1692]: time="2024-09-04T17:32:58.254921005Z" level=info msg="TearDown network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" successfully" Sep 4 17:32:58.255583 containerd[1692]: time="2024-09-04T17:32:58.254974706Z" level=info msg="StopPodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" returns successfully" Sep 4 17:32:58.255668 containerd[1692]: time="2024-09-04T17:32:58.255629916Z" level=info msg="RemovePodSandbox for \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\"" Sep 4 17:32:58.255708 containerd[1692]: time="2024-09-04T17:32:58.255665017Z" level=info msg="Forcibly stopping sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\"" Sep 4 17:32:58.294354 kubelet[3282]: I0904 17:32:58.293383 3282 topology_manager.go:215] "Topology Admit Handler" podUID="bbb73157-61dd-487f-96af-418fd7273e0c" podNamespace="calico-apiserver" podName="calico-apiserver-7768b88989-ph64r" Sep 4 17:32:58.303521 kubelet[3282]: W0904 17:32:58.303408 3282 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975.2.1-a-27f7f2cbdf" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-27f7f2cbdf' and this object Sep 4 17:32:58.303521 kubelet[3282]: E0904 17:32:58.303456 3282 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975.2.1-a-27f7f2cbdf" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-27f7f2cbdf' and this object Sep 4 17:32:58.303706 kubelet[3282]: W0904 
17:32:58.303571 3282 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3975.2.1-a-27f7f2cbdf" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-27f7f2cbdf' and this object Sep 4 17:32:58.303706 kubelet[3282]: E0904 17:32:58.303592 3282 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3975.2.1-a-27f7f2cbdf" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-27f7f2cbdf' and this object Sep 4 17:32:58.311056 kubelet[3282]: I0904 17:32:58.311022 3282 topology_manager.go:215] "Topology Admit Handler" podUID="ce540a01-f297-4e8f-afb5-7e5f6287b709" podNamespace="calico-apiserver" podName="calico-apiserver-7768b88989-crtrh" Sep 4 17:32:58.314573 systemd[1]: Created slice kubepods-besteffort-podbbb73157_61dd_487f_96af_418fd7273e0c.slice - libcontainer container kubepods-besteffort-podbbb73157_61dd_487f_96af_418fd7273e0c.slice. Sep 4 17:32:58.335604 systemd[1]: Created slice kubepods-besteffort-podce540a01_f297_4e8f_afb5_7e5f6287b709.slice - libcontainer container kubepods-besteffort-podce540a01_f297_4e8f_afb5_7e5f6287b709.slice. 
Sep 4 17:32:58.391987 kubelet[3282]: I0904 17:32:58.391920 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce540a01-f297-4e8f-afb5-7e5f6287b709-calico-apiserver-certs\") pod \"calico-apiserver-7768b88989-crtrh\" (UID: \"ce540a01-f297-4e8f-afb5-7e5f6287b709\") " pod="calico-apiserver/calico-apiserver-7768b88989-crtrh" Sep 4 17:32:58.393065 kubelet[3282]: I0904 17:32:58.392731 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jzt\" (UniqueName: \"kubernetes.io/projected/ce540a01-f297-4e8f-afb5-7e5f6287b709-kube-api-access-54jzt\") pod \"calico-apiserver-7768b88989-crtrh\" (UID: \"ce540a01-f297-4e8f-afb5-7e5f6287b709\") " pod="calico-apiserver/calico-apiserver-7768b88989-crtrh" Sep 4 17:32:58.393688 kubelet[3282]: I0904 17:32:58.393399 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbb73157-61dd-487f-96af-418fd7273e0c-calico-apiserver-certs\") pod \"calico-apiserver-7768b88989-ph64r\" (UID: \"bbb73157-61dd-487f-96af-418fd7273e0c\") " pod="calico-apiserver/calico-apiserver-7768b88989-ph64r" Sep 4 17:32:58.394736 kubelet[3282]: I0904 17:32:58.394611 3282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxvh\" (UniqueName: \"kubernetes.io/projected/bbb73157-61dd-487f-96af-418fd7273e0c-kube-api-access-wwxvh\") pod \"calico-apiserver-7768b88989-ph64r\" (UID: \"bbb73157-61dd-487f-96af-418fd7273e0c\") " pod="calico-apiserver/calico-apiserver-7768b88989-ph64r" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.367 [WARNING][5284] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39aed6ce-ad34-481b-9f5d-53f97e2a213b", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"02b74497b9a57912e7fc3c671ce691ee7cb3f83dafce25e72f8d94fd6de6537e", Pod:"coredns-76f75df574-dshhv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70c26411fbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.367 [INFO][5284] k8s.go 
608: Cleaning up netns ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.367 [INFO][5284] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" iface="eth0" netns="" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.367 [INFO][5284] k8s.go 615: Releasing IP address(es) ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.367 [INFO][5284] utils.go 188: Calico CNI releasing IP address ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.411 [INFO][5293] ipam_plugin.go 417: Releasing address using handleID ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.411 [INFO][5293] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.411 [INFO][5293] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.417 [WARNING][5293] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.417 [INFO][5293] ipam_plugin.go 445: Releasing address using workloadID ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" HandleID="k8s-pod-network.40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-coredns--76f75df574--dshhv-eth0" Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.418 [INFO][5293] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.422074 containerd[1692]: 2024-09-04 17:32:58.420 [INFO][5284] k8s.go 621: Teardown processing complete. ContainerID="40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73" Sep 4 17:32:58.422074 containerd[1692]: time="2024-09-04T17:32:58.421944423Z" level=info msg="TearDown network for sandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" successfully" Sep 4 17:32:58.430239 containerd[1692]: time="2024-09-04T17:32:58.430011255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:32:58.430239 containerd[1692]: time="2024-09-04T17:32:58.430106156Z" level=info msg="RemovePodSandbox \"40ed03520d7b28b5641d47244c26fc9332e3313ea5078812f67cbeeb7df4cf73\" returns successfully" Sep 4 17:32:58.431528 containerd[1692]: time="2024-09-04T17:32:58.431000571Z" level=info msg="StopPodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\"" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.478 [WARNING][5313] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"516fb432-35a0-42b1-a39f-352e51299738", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2", Pod:"csi-node-driver-dw5hs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4e97d99ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.479 [INFO][5313] k8s.go 608: Cleaning up netns ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.479 [INFO][5313] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" iface="eth0" netns="" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.479 [INFO][5313] k8s.go 615: Releasing IP address(es) ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.479 [INFO][5313] utils.go 188: Calico CNI releasing IP address ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.515 [INFO][5320] ipam_plugin.go 417: Releasing address using handleID ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.515 [INFO][5320] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.515 [INFO][5320] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.522 [WARNING][5320] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.522 [INFO][5320] ipam_plugin.go 445: Releasing address using workloadID ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.524 [INFO][5320] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.530528 containerd[1692]: 2024-09-04 17:32:58.526 [INFO][5313] k8s.go 621: Teardown processing complete. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.530528 containerd[1692]: time="2024-09-04T17:32:58.528878164Z" level=info msg="TearDown network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" successfully" Sep 4 17:32:58.530528 containerd[1692]: time="2024-09-04T17:32:58.529013666Z" level=info msg="StopPodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" returns successfully" Sep 4 17:32:58.530528 containerd[1692]: time="2024-09-04T17:32:58.529717877Z" level=info msg="RemovePodSandbox for \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\"" Sep 4 17:32:58.530528 containerd[1692]: time="2024-09-04T17:32:58.529755978Z" level=info msg="Forcibly stopping sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\"" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.584 [WARNING][5338] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"516fb432-35a0-42b1-a39f-352e51299738", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"0017a301c0789fe9d39834abb1ee3d9651eb6b519a9ade5dd8641082a04d14e2", Pod:"csi-node-driver-dw5hs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.25.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid4e97d99ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.585 [INFO][5338] k8s.go 608: Cleaning up netns ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.585 [INFO][5338] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" iface="eth0" netns="" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.585 [INFO][5338] k8s.go 615: Releasing IP address(es) ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.585 [INFO][5338] utils.go 188: Calico CNI releasing IP address ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.618 [INFO][5345] ipam_plugin.go 417: Releasing address using handleID ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.619 [INFO][5345] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.619 [INFO][5345] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.627 [WARNING][5345] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.627 [INFO][5345] ipam_plugin.go 445: Releasing address using workloadID ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" HandleID="k8s-pod-network.08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-csi--node--driver--dw5hs-eth0" Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.628 [INFO][5345] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.634049 containerd[1692]: 2024-09-04 17:32:58.630 [INFO][5338] k8s.go 621: Teardown processing complete. ContainerID="08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef" Sep 4 17:32:58.634049 containerd[1692]: time="2024-09-04T17:32:58.631671537Z" level=info msg="TearDown network for sandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" successfully" Sep 4 17:32:58.642490 containerd[1692]: time="2024-09-04T17:32:58.642284109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:32:58.642490 containerd[1692]: time="2024-09-04T17:32:58.642379011Z" level=info msg="RemovePodSandbox \"08ecde12fbf64485ba35e0399a37e9e3209968d9c9d86e2749abb09db49ad7ef\" returns successfully" Sep 4 17:32:58.643775 containerd[1692]: time="2024-09-04T17:32:58.643335226Z" level=info msg="StopPodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\"" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.704 [WARNING][5363] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0", GenerateName:"calico-kube-controllers-7c9db4776-", Namespace:"calico-system", SelfLink:"", UID:"87a16e16-c2e6-4905-8da8-5de334819488", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9db4776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1", Pod:"calico-kube-controllers-7c9db4776-46qgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b655f54aa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.705 [INFO][5363] k8s.go 608: Cleaning up netns ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.705 [INFO][5363] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" iface="eth0" netns="" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.705 [INFO][5363] k8s.go 615: Releasing IP address(es) ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.705 [INFO][5363] utils.go 188: Calico CNI releasing IP address ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.736 [INFO][5369] ipam_plugin.go 417: Releasing address using handleID ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.737 [INFO][5369] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.737 [INFO][5369] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.742 [WARNING][5369] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.742 [INFO][5369] ipam_plugin.go 445: Releasing address using workloadID ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.743 [INFO][5369] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.745622 containerd[1692]: 2024-09-04 17:32:58.744 [INFO][5363] k8s.go 621: Teardown processing complete. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.746738 containerd[1692]: time="2024-09-04T17:32:58.745667092Z" level=info msg="TearDown network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" successfully" Sep 4 17:32:58.746738 containerd[1692]: time="2024-09-04T17:32:58.745704293Z" level=info msg="StopPodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" returns successfully" Sep 4 17:32:58.746738 containerd[1692]: time="2024-09-04T17:32:58.746661508Z" level=info msg="RemovePodSandbox for \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\"" Sep 4 17:32:58.746738 containerd[1692]: time="2024-09-04T17:32:58.746699609Z" level=info msg="Forcibly stopping sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\"" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.779 [WARNING][5388] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0", GenerateName:"calico-kube-controllers-7c9db4776-", Namespace:"calico-system", SelfLink:"", UID:"87a16e16-c2e6-4905-8da8-5de334819488", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9db4776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"5a9ad33850aa86a5b25c8d288895e423d450b84f734140e7562fa4385ed61cb1", Pod:"calico-kube-controllers-7c9db4776-46qgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b655f54aa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.779 [INFO][5388] k8s.go 608: Cleaning up netns ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.779 [INFO][5388] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" iface="eth0" netns="" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.779 [INFO][5388] k8s.go 615: Releasing IP address(es) ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.779 [INFO][5388] utils.go 188: Calico CNI releasing IP address ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.798 [INFO][5395] ipam_plugin.go 417: Releasing address using handleID ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.799 [INFO][5395] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.799 [INFO][5395] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.803 [WARNING][5395] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.803 [INFO][5395] ipam_plugin.go 445: Releasing address using workloadID ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" HandleID="k8s-pod-network.79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--kube--controllers--7c9db4776--46qgr-eth0" Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.805 [INFO][5395] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:32:58.807097 containerd[1692]: 2024-09-04 17:32:58.806 [INFO][5388] k8s.go 621: Teardown processing complete. ContainerID="79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef" Sep 4 17:32:58.808051 containerd[1692]: time="2024-09-04T17:32:58.807147993Z" level=info msg="TearDown network for sandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" successfully" Sep 4 17:32:58.814390 containerd[1692]: time="2024-09-04T17:32:58.814331110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:32:58.814528 containerd[1692]: time="2024-09-04T17:32:58.814405611Z" level=info msg="RemovePodSandbox \"79cdfbd7cbac24b3b14a2e636e4d080b1b74ae8ea37d71ee33c907abd68ca2ef\" returns successfully" Sep 4 17:32:59.502138 kubelet[3282]: E0904 17:32:59.502087 3282 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:32:59.502138 kubelet[3282]: E0904 17:32:59.502137 3282 projected.go:200] Error preparing data for projected volume kube-api-access-54jzt for pod calico-apiserver/calico-apiserver-7768b88989-crtrh: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:32:59.502901 kubelet[3282]: E0904 17:32:59.502290 3282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce540a01-f297-4e8f-afb5-7e5f6287b709-kube-api-access-54jzt podName:ce540a01-f297-4e8f-afb5-7e5f6287b709 nodeName:}" failed. No retries permitted until 2024-09-04 17:33:00.002257906 +0000 UTC m=+62.802256424 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-54jzt" (UniqueName: "kubernetes.io/projected/ce540a01-f297-4e8f-afb5-7e5f6287b709-kube-api-access-54jzt") pod "calico-apiserver-7768b88989-crtrh" (UID: "ce540a01-f297-4e8f-afb5-7e5f6287b709") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:32:59.505007 kubelet[3282]: E0904 17:32:59.504974 3282 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:32:59.505007 kubelet[3282]: E0904 17:32:59.505013 3282 projected.go:200] Error preparing data for projected volume kube-api-access-wwxvh for pod calico-apiserver/calico-apiserver-7768b88989-ph64r: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:32:59.505222 kubelet[3282]: E0904 17:32:59.505069 3282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bbb73157-61dd-487f-96af-418fd7273e0c-kube-api-access-wwxvh podName:bbb73157-61dd-487f-96af-418fd7273e0c nodeName:}" failed. No retries permitted until 2024-09-04 17:33:00.005050751 +0000 UTC m=+62.805049369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwxvh" (UniqueName: "kubernetes.io/projected/bbb73157-61dd-487f-96af-418fd7273e0c-kube-api-access-wwxvh") pod "calico-apiserver-7768b88989-ph64r" (UID: "bbb73157-61dd-487f-96af-418fd7273e0c") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:33:00.125055 containerd[1692]: time="2024-09-04T17:33:00.124967041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7768b88989-ph64r,Uid:bbb73157-61dd-487f-96af-418fd7273e0c,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:33:00.144041 containerd[1692]: time="2024-09-04T17:33:00.143991850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7768b88989-crtrh,Uid:ce540a01-f297-4e8f-afb5-7e5f6287b709,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:33:00.297344 systemd-networkd[1340]: cali675a58d8a6e: Link UP Sep 4 17:33:00.298783 systemd-networkd[1340]: cali675a58d8a6e: Gained carrier Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.193 [INFO][5406] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0 calico-apiserver-7768b88989- calico-apiserver bbb73157-61dd-487f-96af-418fd7273e0c 829 0 2024-09-04 17:32:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7768b88989 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-27f7f2cbdf calico-apiserver-7768b88989-ph64r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali675a58d8a6e [] []}} ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" 
WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.193 [INFO][5406] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.239 [INFO][5426] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" HandleID="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.251 [INFO][5426] ipam_plugin.go 270: Auto assigning IP ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" HandleID="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-27f7f2cbdf", "pod":"calico-apiserver-7768b88989-ph64r", "timestamp":"2024-09-04 17:33:00.239419803 +0000 UTC"}, Hostname:"ci-3975.2.1-a-27f7f2cbdf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.252 [INFO][5426] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.252 [INFO][5426] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.252 [INFO][5426] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-27f7f2cbdf' Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.254 [INFO][5426] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.259 [INFO][5426] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.268 [INFO][5426] ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.270 [INFO][5426] ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.273 [INFO][5426] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.273 [INFO][5426] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.274 [INFO][5426] ipam.go 1685: Creating new handle: k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518 Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.279 [INFO][5426] ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.286 [INFO][5426] ipam.go 1216: Successfully claimed IPs: [192.168.25.69/26] block=192.168.25.64/26 
handle="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.287 [INFO][5426] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.69/26] handle="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.287 [INFO][5426] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:00.333741 containerd[1692]: 2024-09-04 17:33:00.287 [INFO][5426] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.25.69/26] IPv6=[] ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" HandleID="k8s-pod-network.6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.335394 containerd[1692]: 2024-09-04 17:33:00.291 [INFO][5406] k8s.go 386: Populated endpoint ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0", GenerateName:"calico-apiserver-7768b88989-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbb73157-61dd-487f-96af-418fd7273e0c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7768b88989", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"", Pod:"calico-apiserver-7768b88989-ph64r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali675a58d8a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:00.335394 containerd[1692]: 2024-09-04 17:33:00.291 [INFO][5406] k8s.go 387: Calico CNI using IPs: [192.168.25.69/32] ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.335394 containerd[1692]: 2024-09-04 17:33:00.291 [INFO][5406] dataplane_linux.go 68: Setting the host side veth name to cali675a58d8a6e ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.335394 containerd[1692]: 2024-09-04 17:33:00.299 [INFO][5406] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.335394 containerd[1692]: 2024-09-04 
17:33:00.303 [INFO][5406] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0", GenerateName:"calico-apiserver-7768b88989-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbb73157-61dd-487f-96af-418fd7273e0c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7768b88989", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518", Pod:"calico-apiserver-7768b88989-ph64r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali675a58d8a6e", MAC:"7a:df:d0:76:db:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:00.335394 containerd[1692]: 2024-09-04 17:33:00.328 [INFO][5406] k8s.go 
500: Wrote updated endpoint to datastore ContainerID="6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-ph64r" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--ph64r-eth0" Sep 4 17:33:00.368378 systemd-networkd[1340]: cali8f38dcd684d: Link UP Sep 4 17:33:00.370242 systemd-networkd[1340]: cali8f38dcd684d: Gained carrier Sep 4 17:33:00.396731 containerd[1692]: time="2024-09-04T17:33:00.396352558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:00.396731 containerd[1692]: time="2024-09-04T17:33:00.396422559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.396731 containerd[1692]: time="2024-09-04T17:33:00.396445759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:00.396731 containerd[1692]: time="2024-09-04T17:33:00.396462359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.235 [INFO][5421] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0 calico-apiserver-7768b88989- calico-apiserver ce540a01-f297-4e8f-afb5-7e5f6287b709 831 0 2024-09-04 17:32:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7768b88989 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-27f7f2cbdf calico-apiserver-7768b88989-crtrh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f38dcd684d [] []}} ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.235 [INFO][5421] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.279 [INFO][5434] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" HandleID="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.295 [INFO][5434] ipam_plugin.go 270: Auto assigning IP 
ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" HandleID="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-27f7f2cbdf", "pod":"calico-apiserver-7768b88989-crtrh", "timestamp":"2024-09-04 17:33:00.27914905 +0000 UTC"}, Hostname:"ci-3975.2.1-a-27f7f2cbdf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.295 [INFO][5434] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.295 [INFO][5434] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.295 [INFO][5434] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-27f7f2cbdf' Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.298 [INFO][5434] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.308 [INFO][5434] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.319 [INFO][5434] ipam.go 489: Trying affinity for 192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.322 [INFO][5434] ipam.go 155: Attempting to load block cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.329 [INFO][5434] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.25.64/26 host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.329 [INFO][5434] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.25.64/26 handle="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.334 [INFO][5434] ipam.go 1685: Creating new handle: k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.341 [INFO][5434] ipam.go 1203: Writing block in order to claim IPs block=192.168.25.64/26 handle="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.350 [INFO][5434] ipam.go 1216: Successfully claimed IPs: [192.168.25.70/26] block=192.168.25.64/26 
handle="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.351 [INFO][5434] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.25.70/26] handle="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" host="ci-3975.2.1-a-27f7f2cbdf" Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.351 [INFO][5434] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:00.409694 containerd[1692]: 2024-09-04 17:33:00.351 [INFO][5434] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.25.70/26] IPv6=[] ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" HandleID="k8s-pod-network.0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Workload="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.410660 containerd[1692]: 2024-09-04 17:33:00.358 [INFO][5421] k8s.go 386: Populated endpoint ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0", GenerateName:"calico-apiserver-7768b88989-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce540a01-f297-4e8f-afb5-7e5f6287b709", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7768b88989", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"", Pod:"calico-apiserver-7768b88989-crtrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f38dcd684d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:00.410660 containerd[1692]: 2024-09-04 17:33:00.358 [INFO][5421] k8s.go 387: Calico CNI using IPs: [192.168.25.70/32] ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.410660 containerd[1692]: 2024-09-04 17:33:00.358 [INFO][5421] dataplane_linux.go 68: Setting the host side veth name to cali8f38dcd684d ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.410660 containerd[1692]: 2024-09-04 17:33:00.362 [INFO][5421] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.410660 containerd[1692]: 2024-09-04 
17:33:00.383 [INFO][5421] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0", GenerateName:"calico-apiserver-7768b88989-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce540a01-f297-4e8f-afb5-7e5f6287b709", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 32, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7768b88989", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-27f7f2cbdf", ContainerID:"0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c", Pod:"calico-apiserver-7768b88989-crtrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f38dcd684d", MAC:"26:e8:bf:28:90:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:00.410660 containerd[1692]: 2024-09-04 17:33:00.402 [INFO][5421] k8s.go 
500: Wrote updated endpoint to datastore ContainerID="0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c" Namespace="calico-apiserver" Pod="calico-apiserver-7768b88989-crtrh" WorkloadEndpoint="ci--3975.2.1--a--27f7f2cbdf-k8s-calico--apiserver--7768b88989--crtrh-eth0" Sep 4 17:33:00.453328 systemd[1]: Started cri-containerd-6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518.scope - libcontainer container 6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518. Sep 4 17:33:00.490353 containerd[1692]: time="2024-09-04T17:33:00.490093083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:00.490615 containerd[1692]: time="2024-09-04T17:33:00.490309087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.490710 containerd[1692]: time="2024-09-04T17:33:00.490554291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:00.490948 containerd[1692]: time="2024-09-04T17:33:00.490769094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:00.532814 systemd[1]: Started cri-containerd-0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c.scope - libcontainer container 0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c. 
Sep 4 17:33:00.559579 containerd[1692]: time="2024-09-04T17:33:00.559117107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7768b88989-ph64r,Uid:bbb73157-61dd-487f-96af-418fd7273e0c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518\"" Sep 4 17:33:00.562364 containerd[1692]: time="2024-09-04T17:33:00.561743049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:33:00.598144 containerd[1692]: time="2024-09-04T17:33:00.598061240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7768b88989-crtrh,Uid:ce540a01-f297-4e8f-afb5-7e5f6287b709,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c\"" Sep 4 17:33:01.413390 systemd[1]: run-containerd-runc-k8s.io-f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb-runc.jd5ee3.mount: Deactivated successfully. Sep 4 17:33:01.686433 systemd-networkd[1340]: cali675a58d8a6e: Gained IPv6LL Sep 4 17:33:02.390831 systemd-networkd[1340]: cali8f38dcd684d: Gained IPv6LL Sep 4 17:33:03.499034 containerd[1692]: time="2024-09-04T17:33:03.498977355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:03.501973 containerd[1692]: time="2024-09-04T17:33:03.501903397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:33:03.505611 containerd[1692]: time="2024-09-04T17:33:03.505527148Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:03.511739 containerd[1692]: time="2024-09-04T17:33:03.511662635Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:03.512478 containerd[1692]: time="2024-09-04T17:33:03.512292844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.950506994s" Sep 4 17:33:03.512478 containerd[1692]: time="2024-09-04T17:33:03.512336945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:33:03.513419 containerd[1692]: time="2024-09-04T17:33:03.513197957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:33:03.514852 containerd[1692]: time="2024-09-04T17:33:03.514643177Z" level=info msg="CreateContainer within sandbox \"6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:33:03.549352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399218940.mount: Deactivated successfully. 
Sep 4 17:33:03.563171 containerd[1692]: time="2024-09-04T17:33:03.563109664Z" level=info msg="CreateContainer within sandbox \"6c85bf1f7b0273bd34148524c38d5247e9a49529ce3ab589707382c095a3e518\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fc2f2868c112bb93c6a89f0dfa99151473d16343957ace46bea36fd195533b74\"" Sep 4 17:33:03.563761 containerd[1692]: time="2024-09-04T17:33:03.563723773Z" level=info msg="StartContainer for \"fc2f2868c112bb93c6a89f0dfa99151473d16343957ace46bea36fd195533b74\"" Sep 4 17:33:03.603045 systemd[1]: run-containerd-runc-k8s.io-fc2f2868c112bb93c6a89f0dfa99151473d16343957ace46bea36fd195533b74-runc.1ayBHP.mount: Deactivated successfully. Sep 4 17:33:03.611321 systemd[1]: Started cri-containerd-fc2f2868c112bb93c6a89f0dfa99151473d16343957ace46bea36fd195533b74.scope - libcontainer container fc2f2868c112bb93c6a89f0dfa99151473d16343957ace46bea36fd195533b74. Sep 4 17:33:03.658181 containerd[1692]: time="2024-09-04T17:33:03.658027410Z" level=info msg="StartContainer for \"fc2f2868c112bb93c6a89f0dfa99151473d16343957ace46bea36fd195533b74\" returns successfully" Sep 4 17:33:03.844632 containerd[1692]: time="2024-09-04T17:33:03.843502939Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:03.846290 containerd[1692]: time="2024-09-04T17:33:03.846235578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:33:03.850916 containerd[1692]: time="2024-09-04T17:33:03.850874943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 337.636886ms" Sep 4 
17:33:03.851062 containerd[1692]: time="2024-09-04T17:33:03.851044846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:33:03.854403 containerd[1692]: time="2024-09-04T17:33:03.854339993Z" level=info msg="CreateContainer within sandbox \"0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:33:03.888140 containerd[1692]: time="2024-09-04T17:33:03.887958969Z" level=info msg="CreateContainer within sandbox \"0dbb52686bd9d0e38fdb68c2a4a8e5890aad768064c19032113a3e7be500ea1c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"212d2aa57255301191994a4058faf1a0f2269d24a0c9d3f5037d0f3a883481bd\"" Sep 4 17:33:03.888903 containerd[1692]: time="2024-09-04T17:33:03.888859782Z" level=info msg="StartContainer for \"212d2aa57255301191994a4058faf1a0f2269d24a0c9d3f5037d0f3a883481bd\"" Sep 4 17:33:03.925384 systemd[1]: Started cri-containerd-212d2aa57255301191994a4058faf1a0f2269d24a0c9d3f5037d0f3a883481bd.scope - libcontainer container 212d2aa57255301191994a4058faf1a0f2269d24a0c9d3f5037d0f3a883481bd. 
Sep 4 17:33:03.987900 containerd[1692]: time="2024-09-04T17:33:03.987850385Z" level=info msg="StartContainer for \"212d2aa57255301191994a4058faf1a0f2269d24a0c9d3f5037d0f3a883481bd\" returns successfully" Sep 4 17:33:04.074966 kubelet[3282]: I0904 17:33:04.074921 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7768b88989-crtrh" podStartSLOduration=2.822893531 podStartE2EDuration="6.074861619s" podCreationTimestamp="2024-09-04 17:32:58 +0000 UTC" firstStartedPulling="2024-09-04 17:33:00.599445863 +0000 UTC m=+63.399444481" lastFinishedPulling="2024-09-04 17:33:03.851414051 +0000 UTC m=+66.651412569" observedRunningTime="2024-09-04 17:33:04.074426412 +0000 UTC m=+66.874424930" watchObservedRunningTime="2024-09-04 17:33:04.074861619 +0000 UTC m=+66.874860137" Sep 4 17:33:04.598743 kubelet[3282]: I0904 17:33:04.597775 3282 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7768b88989-ph64r" podStartSLOduration=3.64618812 podStartE2EDuration="6.59771443s" podCreationTimestamp="2024-09-04 17:32:58 +0000 UTC" firstStartedPulling="2024-09-04 17:33:00.561432344 +0000 UTC m=+63.361430862" lastFinishedPulling="2024-09-04 17:33:03.512958554 +0000 UTC m=+66.312957172" observedRunningTime="2024-09-04 17:33:04.100137977 +0000 UTC m=+66.900136495" watchObservedRunningTime="2024-09-04 17:33:04.59771443 +0000 UTC m=+67.397713048" Sep 4 17:34:06.802468 systemd[1]: Started sshd@7-10.200.8.34:22-10.200.16.10:60022.service - OpenSSH per-connection server daemon (10.200.16.10:60022). Sep 4 17:34:07.421389 sshd[5823]: Accepted publickey for core from 10.200.16.10 port 60022 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:34:07.423517 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:07.428647 systemd-logind[1674]: New session 10 of user core. 
Sep 4 17:34:07.434306 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 17:34:08.038795 sshd[5823]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:08.043619 systemd[1]: sshd@7-10.200.8.34:22-10.200.16.10:60022.service: Deactivated successfully.
Sep 4 17:34:08.047227 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 17:34:08.048942 systemd-logind[1674]: Session 10 logged out. Waiting for processes to exit.
Sep 4 17:34:08.050246 systemd-logind[1674]: Removed session 10.
Sep 4 17:34:13.154496 systemd[1]: Started sshd@8-10.200.8.34:22-10.200.16.10:36660.service - OpenSSH per-connection server daemon (10.200.16.10:36660).
Sep 4 17:34:13.801083 sshd[5851]: Accepted publickey for core from 10.200.16.10 port 36660 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:13.803054 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:13.807991 systemd-logind[1674]: New session 11 of user core.
Sep 4 17:34:13.814352 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 17:34:14.310146 sshd[5851]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:14.313883 systemd[1]: sshd@8-10.200.8.34:22-10.200.16.10:36660.service: Deactivated successfully.
Sep 4 17:34:14.316603 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 17:34:14.318733 systemd-logind[1674]: Session 11 logged out. Waiting for processes to exit.
Sep 4 17:34:14.319928 systemd-logind[1674]: Removed session 11.
Sep 4 17:34:19.427490 systemd[1]: Started sshd@9-10.200.8.34:22-10.200.16.10:48556.service - OpenSSH per-connection server daemon (10.200.16.10:48556).
Sep 4 17:34:20.045763 sshd[5866]: Accepted publickey for core from 10.200.16.10 port 48556 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:20.047569 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:20.052284 systemd-logind[1674]: New session 12 of user core.
Sep 4 17:34:20.058344 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 17:34:20.547788 sshd[5866]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:20.552441 systemd[1]: sshd@9-10.200.8.34:22-10.200.16.10:48556.service: Deactivated successfully.
Sep 4 17:34:20.554788 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 17:34:20.555723 systemd-logind[1674]: Session 12 logged out. Waiting for processes to exit.
Sep 4 17:34:20.556848 systemd-logind[1674]: Removed session 12.
Sep 4 17:34:22.132931 systemd[1]: run-containerd-runc-k8s.io-dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8-runc.kxVwr6.mount: Deactivated successfully.
Sep 4 17:34:25.659433 systemd[1]: Started sshd@10-10.200.8.34:22-10.200.16.10:48566.service - OpenSSH per-connection server daemon (10.200.16.10:48566).
Sep 4 17:34:26.287422 sshd[5922]: Accepted publickey for core from 10.200.16.10 port 48566 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:26.289402 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:26.296747 systemd-logind[1674]: New session 13 of user core.
Sep 4 17:34:26.302593 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 17:34:26.792824 sshd[5922]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:26.797126 systemd[1]: sshd@10-10.200.8.34:22-10.200.16.10:48566.service: Deactivated successfully.
Sep 4 17:34:26.799469 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 17:34:26.800335 systemd-logind[1674]: Session 13 logged out. Waiting for processes to exit.
Sep 4 17:34:26.801387 systemd-logind[1674]: Removed session 13.
Sep 4 17:34:26.909535 systemd[1]: Started sshd@11-10.200.8.34:22-10.200.16.10:48576.service - OpenSSH per-connection server daemon (10.200.16.10:48576).
Sep 4 17:34:27.534141 sshd[5939]: Accepted publickey for core from 10.200.16.10 port 48576 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:27.535930 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:27.540613 systemd-logind[1674]: New session 14 of user core.
Sep 4 17:34:27.546330 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 17:34:28.071475 sshd[5939]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:28.075747 systemd[1]: sshd@11-10.200.8.34:22-10.200.16.10:48576.service: Deactivated successfully.
Sep 4 17:34:28.078071 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 17:34:28.078939 systemd-logind[1674]: Session 14 logged out. Waiting for processes to exit.
Sep 4 17:34:28.080213 systemd-logind[1674]: Removed session 14.
Sep 4 17:34:28.186497 systemd[1]: Started sshd@12-10.200.8.34:22-10.200.16.10:48578.service - OpenSSH per-connection server daemon (10.200.16.10:48578).
Sep 4 17:34:28.804297 sshd[5950]: Accepted publickey for core from 10.200.16.10 port 48578 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:28.806023 sshd[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:28.811132 systemd-logind[1674]: New session 15 of user core.
Sep 4 17:34:28.814367 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 17:34:29.304970 sshd[5950]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:29.308766 systemd[1]: sshd@12-10.200.8.34:22-10.200.16.10:48578.service: Deactivated successfully.
Sep 4 17:34:29.311930 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 17:34:29.313716 systemd-logind[1674]: Session 15 logged out. Waiting for processes to exit.
Sep 4 17:34:29.314839 systemd-logind[1674]: Removed session 15.
Sep 4 17:34:31.425498 systemd[1]: run-containerd-runc-k8s.io-f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb-runc.wMDgqy.mount: Deactivated successfully.
Sep 4 17:34:34.420473 systemd[1]: Started sshd@13-10.200.8.34:22-10.200.16.10:51758.service - OpenSSH per-connection server daemon (10.200.16.10:51758).
Sep 4 17:34:35.039258 sshd[5996]: Accepted publickey for core from 10.200.16.10 port 51758 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:35.040965 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:35.045216 systemd-logind[1674]: New session 16 of user core.
Sep 4 17:34:35.054335 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 17:34:35.541384 sshd[5996]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:35.547418 systemd[1]: sshd@13-10.200.8.34:22-10.200.16.10:51758.service: Deactivated successfully.
Sep 4 17:34:35.550734 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 17:34:35.552463 systemd-logind[1674]: Session 16 logged out. Waiting for processes to exit.
Sep 4 17:34:35.553571 systemd-logind[1674]: Removed session 16.
Sep 4 17:34:40.656473 systemd[1]: Started sshd@14-10.200.8.34:22-10.200.16.10:47576.service - OpenSSH per-connection server daemon (10.200.16.10:47576).
Sep 4 17:34:41.277085 sshd[6009]: Accepted publickey for core from 10.200.16.10 port 47576 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:41.278994 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:41.283071 systemd-logind[1674]: New session 17 of user core.
Sep 4 17:34:41.288320 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 17:34:41.785275 sshd[6009]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:41.788468 systemd[1]: sshd@14-10.200.8.34:22-10.200.16.10:47576.service: Deactivated successfully.
Sep 4 17:34:41.790832 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:34:41.792504 systemd-logind[1674]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:34:41.793562 systemd-logind[1674]: Removed session 17.
Sep 4 17:34:46.919492 systemd[1]: Started sshd@15-10.200.8.34:22-10.200.16.10:47578.service - OpenSSH per-connection server daemon (10.200.16.10:47578).
Sep 4 17:34:47.539271 sshd[6029]: Accepted publickey for core from 10.200.16.10 port 47578 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:47.541022 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:47.546350 systemd-logind[1674]: New session 18 of user core.
Sep 4 17:34:47.549380 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:34:48.055023 sshd[6029]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:48.058404 systemd[1]: sshd@15-10.200.8.34:22-10.200.16.10:47578.service: Deactivated successfully.
Sep 4 17:34:48.060761 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:34:48.062449 systemd-logind[1674]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:34:48.063645 systemd-logind[1674]: Removed session 18.
Sep 4 17:34:48.174468 systemd[1]: Started sshd@16-10.200.8.34:22-10.200.16.10:47592.service - OpenSSH per-connection server daemon (10.200.16.10:47592).
Sep 4 17:34:48.792181 sshd[6042]: Accepted publickey for core from 10.200.16.10 port 47592 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:48.794068 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:48.799018 systemd-logind[1674]: New session 19 of user core.
Sep 4 17:34:48.801371 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:34:49.352584 sshd[6042]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:49.356554 systemd[1]: sshd@16-10.200.8.34:22-10.200.16.10:47592.service: Deactivated successfully.
Sep 4 17:34:49.359423 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:34:49.361143 systemd-logind[1674]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:34:49.362521 systemd-logind[1674]: Removed session 19.
Sep 4 17:34:49.467508 systemd[1]: Started sshd@17-10.200.8.34:22-10.200.16.10:50958.service - OpenSSH per-connection server daemon (10.200.16.10:50958).
Sep 4 17:34:50.083755 sshd[6053]: Accepted publickey for core from 10.200.16.10 port 50958 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:50.085582 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:50.090557 systemd-logind[1674]: New session 20 of user core.
Sep 4 17:34:50.096359 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:34:52.136826 systemd[1]: run-containerd-runc-k8s.io-dfc1acd449aeea02c2ad02a075111066b0f14930e5ce6d88523f929f186c7aa8-runc.kkUkIT.mount: Deactivated successfully.
Sep 4 17:34:52.377044 sshd[6053]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:52.381720 systemd[1]: sshd@17-10.200.8.34:22-10.200.16.10:50958.service: Deactivated successfully.
Sep 4 17:34:52.383933 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:34:52.384928 systemd-logind[1674]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:34:52.386071 systemd-logind[1674]: Removed session 20.
Sep 4 17:34:52.494572 systemd[1]: Started sshd@18-10.200.8.34:22-10.200.16.10:50964.service - OpenSSH per-connection server daemon (10.200.16.10:50964).
Sep 4 17:34:53.123385 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 50964 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:53.125265 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:53.130525 systemd-logind[1674]: New session 21 of user core.
Sep 4 17:34:53.136334 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:34:53.734537 sshd[6111]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:53.739301 systemd[1]: sshd@18-10.200.8.34:22-10.200.16.10:50964.service: Deactivated successfully.
Sep 4 17:34:53.741392 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:34:53.742275 systemd-logind[1674]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:34:53.743398 systemd-logind[1674]: Removed session 21.
Sep 4 17:34:53.845743 systemd[1]: Started sshd@19-10.200.8.34:22-10.200.16.10:50966.service - OpenSSH per-connection server daemon (10.200.16.10:50966).
Sep 4 17:34:54.473607 sshd[6127]: Accepted publickey for core from 10.200.16.10 port 50966 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:34:54.475193 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:54.480315 systemd-logind[1674]: New session 22 of user core.
Sep 4 17:34:54.485330 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:34:54.978215 sshd[6127]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:54.981643 systemd[1]: sshd@19-10.200.8.34:22-10.200.16.10:50966.service: Deactivated successfully.
Sep 4 17:34:54.984128 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:34:54.985995 systemd-logind[1674]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:34:54.987810 systemd-logind[1674]: Removed session 22.
Sep 4 17:35:00.098461 systemd[1]: Started sshd@20-10.200.8.34:22-10.200.16.10:52416.service - OpenSSH per-connection server daemon (10.200.16.10:52416).
Sep 4 17:35:00.716100 sshd[6145]: Accepted publickey for core from 10.200.16.10 port 52416 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:35:00.717826 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:35:00.722698 systemd-logind[1674]: New session 23 of user core.
Sep 4 17:35:00.729331 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:35:01.221329 sshd[6145]: pam_unix(sshd:session): session closed for user core
Sep 4 17:35:01.224954 systemd[1]: sshd@20-10.200.8.34:22-10.200.16.10:52416.service: Deactivated successfully.
Sep 4 17:35:01.227758 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:35:01.229630 systemd-logind[1674]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:35:01.231007 systemd-logind[1674]: Removed session 23.
Sep 4 17:35:06.339476 systemd[1]: Started sshd@21-10.200.8.34:22-10.200.16.10:52424.service - OpenSSH per-connection server daemon (10.200.16.10:52424).
Sep 4 17:35:06.959333 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 52424 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:35:06.961117 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:35:06.966301 systemd-logind[1674]: New session 24 of user core.
Sep 4 17:35:06.970335 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:35:07.458334 sshd[6180]: pam_unix(sshd:session): session closed for user core
Sep 4 17:35:07.462080 systemd[1]: sshd@21-10.200.8.34:22-10.200.16.10:52424.service: Deactivated successfully.
Sep 4 17:35:07.465086 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:35:07.466865 systemd-logind[1674]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:35:07.467956 systemd-logind[1674]: Removed session 24.
Sep 4 17:35:12.586495 systemd[1]: Started sshd@22-10.200.8.34:22-10.200.16.10:43640.service - OpenSSH per-connection server daemon (10.200.16.10:43640).
Sep 4 17:35:13.205654 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 43640 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:35:13.207397 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:35:13.211541 systemd-logind[1674]: New session 25 of user core.
Sep 4 17:35:13.219326 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:35:13.703312 sshd[6195]: pam_unix(sshd:session): session closed for user core
Sep 4 17:35:13.707827 systemd[1]: sshd@22-10.200.8.34:22-10.200.16.10:43640.service: Deactivated successfully.
Sep 4 17:35:13.710133 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:35:13.710981 systemd-logind[1674]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:35:13.712328 systemd-logind[1674]: Removed session 25.
Sep 4 17:35:18.820502 systemd[1]: Started sshd@23-10.200.8.34:22-10.200.16.10:49496.service - OpenSSH per-connection server daemon (10.200.16.10:49496).
Sep 4 17:35:19.439616 sshd[6212]: Accepted publickey for core from 10.200.16.10 port 49496 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:35:19.441501 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:35:19.446224 systemd-logind[1674]: New session 26 of user core.
Sep 4 17:35:19.452330 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:35:19.940110 sshd[6212]: pam_unix(sshd:session): session closed for user core
Sep 4 17:35:19.944520 systemd[1]: sshd@23-10.200.8.34:22-10.200.16.10:49496.service: Deactivated successfully.
Sep 4 17:35:19.946923 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:35:19.947880 systemd-logind[1674]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:35:19.948998 systemd-logind[1674]: Removed session 26.
Sep 4 17:35:25.056491 systemd[1]: Started sshd@24-10.200.8.34:22-10.200.16.10:49512.service - OpenSSH per-connection server daemon (10.200.16.10:49512).
Sep 4 17:35:25.674925 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 49512 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:35:25.676697 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:35:25.681652 systemd-logind[1674]: New session 27 of user core.
Sep 4 17:35:25.685444 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:35:26.176459 sshd[6251]: pam_unix(sshd:session): session closed for user core
Sep 4 17:35:26.180579 systemd[1]: sshd@24-10.200.8.34:22-10.200.16.10:49512.service: Deactivated successfully.
Sep 4 17:35:26.182798 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:35:26.183621 systemd-logind[1674]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:35:26.185122 systemd-logind[1674]: Removed session 27.
Sep 4 17:35:31.414006 systemd[1]: run-containerd-runc-k8s.io-f905048a252af60a86ede11b544f75fb91b1c6060590b42e094b35696511aceb-runc.dUBxtD.mount: Deactivated successfully.