Sep 4 17:42:54.080908 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:42:54.080945 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:42:54.080960 kernel: BIOS-provided physical RAM map: Sep 4 17:42:54.080971 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:42:54.080981 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 4 17:42:54.080992 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Sep 4 17:42:54.081005 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Sep 4 17:42:54.081020 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Sep 4 17:42:54.081031 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 4 17:42:54.081042 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 4 17:42:54.081054 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 4 17:42:54.081065 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 4 17:42:54.081076 kernel: printk: bootconsole [earlyser0] enabled Sep 4 17:42:54.081088 kernel: NX (Execute Disable) protection: active Sep 4 17:42:54.081105 kernel: APIC: Static calls initialized Sep 4 17:42:54.081118 kernel: efi: EFI v2.7 by Microsoft Sep 4 17:42:54.081131 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Sep 4 17:42:54.081143 kernel: SMBIOS 3.1.0 present. 
Sep 4 17:42:54.081156 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 4 17:42:54.081188 kernel: Hypervisor detected: Microsoft Hyper-V Sep 4 17:42:54.081201 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 4 17:42:54.081213 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Sep 4 17:42:54.081225 kernel: Hyper-V: Nested features: 0x1e0101 Sep 4 17:42:54.081238 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 4 17:42:54.081254 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 4 17:42:54.081266 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:42:54.081279 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:42:54.081293 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 4 17:42:54.081306 kernel: tsc: Detected 2593.908 MHz processor Sep 4 17:42:54.081318 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:42:54.081331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:42:54.081344 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 4 17:42:54.081357 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:42:54.081372 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:42:54.081385 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 4 17:42:54.081398 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 4 17:42:54.081410 kernel: Using GB pages for direct mapping Sep 4 17:42:54.081423 kernel: Secure boot disabled Sep 4 17:42:54.081436 kernel: ACPI: Early table checksum verification disabled Sep 4 17:42:54.081448 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 4 17:42:54.081466 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081483 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081496 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 4 17:42:54.081510 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 4 17:42:54.081524 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081537 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081551 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081567 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081581 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081594 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081608 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081621 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 4 17:42:54.081635 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 4 17:42:54.081649 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 4 17:42:54.081663 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 4 17:42:54.081679 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Sep 4 17:42:54.081692 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 4 17:42:54.081706 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 4 17:42:54.081720 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 4 17:42:54.081734 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 4 17:42:54.081747 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 4 17:42:54.081761 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:42:54.081774 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:42:54.081788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 4 17:42:54.081804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 4 17:42:54.081818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 4 17:42:54.081832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 4 17:42:54.081845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 4 17:42:54.081859 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 4 17:42:54.081872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 4 17:42:54.081886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 4 17:42:54.081900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 4 17:42:54.081913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 4 17:42:54.081929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 4 17:42:54.081943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 4 17:42:54.081957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 4 17:42:54.081970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 4 17:42:54.081984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 4 17:42:54.081998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 4 17:42:54.082011 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 4 17:42:54.082025 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 4 17:42:54.082038 kernel: Zone ranges: Sep 4 17:42:54.082055 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:42:54.082068 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:42:54.082082 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:42:54.082096 kernel: Movable zone start for each node Sep 4 17:42:54.082109 kernel: Early memory node ranges Sep 4 17:42:54.082123 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:42:54.082136 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 4 17:42:54.082150 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 4 17:42:54.082163 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:42:54.082187 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 4 17:42:54.082200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:42:54.082214 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:42:54.082227 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Sep 4 17:42:54.082241 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 4 
17:42:54.082254 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 4 17:42:54.082268 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:42:54.082282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:42:54.082295 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:42:54.082312 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 4 17:42:54.082326 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:42:54.082339 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 4 17:42:54.082353 kernel: Booting paravirtualized kernel on Hyper-V Sep 4 17:42:54.082367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:42:54.082381 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:42:54.082394 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:42:54.082408 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:42:54.082421 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:42:54.082437 kernel: Hyper-V: PV spinlocks enabled Sep 4 17:42:54.082450 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:42:54.082466 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:42:54.082480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:42:54.082493 kernel: random: crng init done Sep 4 17:42:54.082506 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:42:54.082520 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:42:54.082534 kernel: Fallback order for Node 0: 0 Sep 4 17:42:54.082550 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 4 17:42:54.082574 kernel: Policy zone: Normal Sep 4 17:42:54.082590 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:42:54.082605 kernel: software IO TLB: area num 2. Sep 4 17:42:54.082619 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 310124K reserved, 0K cma-reserved) Sep 4 17:42:54.082634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:42:54.082648 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:42:54.082663 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:42:54.082677 kernel: Dynamic Preempt: voluntary Sep 4 17:42:54.082691 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:42:54.082707 kernel: rcu: RCU event tracing is enabled. Sep 4 17:42:54.082724 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:42:54.082739 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:42:54.082754 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:42:54.082768 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:42:54.082783 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:42:54.082800 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:42:54.082814 kernel: Using NULL legacy PIC Sep 4 17:42:54.082829 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 4 17:42:54.082843 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:42:54.082858 kernel: Console: colour dummy device 80x25 Sep 4 17:42:54.082872 kernel: printk: console [tty1] enabled Sep 4 17:42:54.082887 kernel: printk: console [ttyS0] enabled Sep 4 17:42:54.082901 kernel: printk: bootconsole [earlyser0] disabled Sep 4 17:42:54.082915 kernel: ACPI: Core revision 20230628 Sep 4 17:42:54.082930 kernel: Failed to register legacy timer interrupt Sep 4 17:42:54.082947 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:42:54.082962 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:42:54.082976 kernel: Hyper-V: Using IPI hypercalls Sep 4 17:42:54.082991 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 4 17:42:54.083005 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 4 17:42:54.083020 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 4 17:42:54.083035 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 4 17:42:54.083049 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 4 17:42:54.083064 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 4 17:42:54.083082 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Sep 4 17:42:54.083096 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:42:54.083111 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:42:54.083125 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:42:54.083140 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:42:54.083154 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:42:54.083174 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:42:54.083190 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:42:54.083204 kernel: RETBleed: Vulnerable Sep 4 17:42:54.083221 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:42:54.083236 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:42:54.083250 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:42:54.083264 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:42:54.083279 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:42:54.083293 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:42:54.083308 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:42:54.083322 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:42:54.083337 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:42:54.083351 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:42:54.083365 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:42:54.083383 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 4 17:42:54.083397 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 4 17:42:54.083411 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 4 17:42:54.083426 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 4 17:42:54.083440 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:42:54.083454 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:42:54.083468 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:42:54.083483 kernel: landlock: Up and running. Sep 4 17:42:54.083497 kernel: SELinux: Initializing. Sep 4 17:42:54.083511 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.083526 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.083541 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:42:54.083558 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:42:54.083572 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:42:54.083587 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:42:54.083601 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:42:54.083616 kernel: signal: max sigframe size: 3632 Sep 4 17:42:54.083631 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:42:54.083645 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:42:54.083660 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:42:54.083674 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:42:54.083691 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:42:54.083706 kernel: .... node #0, CPUs: #1 Sep 4 17:42:54.083721 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 4 17:42:54.083736 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:42:54.083750 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:42:54.083765 kernel: smpboot: Max logical packages: 1 Sep 4 17:42:54.083779 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Sep 4 17:42:54.083794 kernel: devtmpfs: initialized Sep 4 17:42:54.083811 kernel: x86/mm: Memory block size: 128MB Sep 4 17:42:54.083825 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 4 17:42:54.083840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:42:54.083855 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:42:54.083869 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:42:54.083884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:42:54.083898 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:42:54.083913 kernel: audit: type=2000 audit(1725471772.028:1): state=initialized audit_enabled=0 res=1 Sep 4 17:42:54.083927 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:42:54.083944 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:42:54.083959 kernel: cpuidle: using governor menu Sep 4 17:42:54.083973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:42:54.083987 kernel: dca service started, version 1.12.1 Sep 4 17:42:54.084002 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 4 17:42:54.084017 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:42:54.084031 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:42:54.084046 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:42:54.084060 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:42:54.084078 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:42:54.084092 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:42:54.084107 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:42:54.084121 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:42:54.084135 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:42:54.084150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:42:54.084165 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:42:54.084186 kernel: ACPI: Interpreter enabled Sep 4 17:42:54.084201 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:42:54.084218 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:42:54.084233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:42:54.084248 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:42:54.084262 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 4 17:42:54.084276 kernel: iommu: Default domain type: Translated Sep 4 17:42:54.084291 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:42:54.084306 kernel: efivars: Registered efivars operations Sep 4 17:42:54.084320 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:42:54.084334 kernel: PCI: System does not support PCI Sep 4 17:42:54.084348 kernel: vgaarb: loaded Sep 4 17:42:54.084361 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 4 17:42:54.084375 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:42:54.084387 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:42:54.084400 kernel: pnp: PnP ACPI init Sep 4 17:42:54.084414 kernel: 
pnp: PnP ACPI: found 3 devices Sep 4 17:42:54.084428 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:42:54.084443 kernel: NET: Registered PF_INET protocol family Sep 4 17:42:54.084457 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:42:54.084475 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:42:54.084490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:42:54.084504 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:42:54.084519 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:42:54.084534 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:42:54.084548 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.084563 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.084577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:42:54.084592 kernel: NET: Registered PF_XDP protocol family Sep 4 17:42:54.084610 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:42:54.084625 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:42:54.084640 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Sep 4 17:42:54.084654 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:42:54.084668 kernel: Initialise system trusted keyrings Sep 4 17:42:54.084682 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 17:42:54.084696 kernel: Key type asymmetric registered Sep 4 17:42:54.084711 kernel: Asymmetric key parser 'x509' registered Sep 4 17:42:54.084725 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:42:54.084743 kernel: io scheduler mq-deadline registered Sep 4 17:42:54.084757 kernel: io scheduler kyber registered Sep 4 17:42:54.084771 kernel: io scheduler bfq registered Sep 4 17:42:54.084786 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:42:54.084800 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:42:54.084815 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:42:54.084830 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:42:54.084844 kernel: i8042: PNP: No PS/2 controller found. 
Sep 4 17:42:54.085035 kernel: rtc_cmos 00:02: registered as rtc0 Sep 4 17:42:54.085230 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:42:53 UTC (1725471773) Sep 4 17:42:54.085371 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 4 17:42:54.085392 kernel: intel_pstate: CPU model not supported Sep 4 17:42:54.085407 kernel: efifb: probing for efifb Sep 4 17:42:54.085421 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:42:54.085434 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:42:54.085448 kernel: efifb: scrolling: redraw Sep 4 17:42:54.085462 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:42:54.085480 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:42:54.085494 kernel: fb0: EFI VGA frame buffer device Sep 4 17:42:54.085508 kernel: pstore: Using crash dump compression: deflate Sep 4 17:42:54.085521 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:42:54.085535 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:42:54.085549 kernel: Segment Routing with IPv6 Sep 4 17:42:54.085563 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:42:54.085577 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:42:54.085591 kernel: Key type dns_resolver registered Sep 4 17:42:54.085607 kernel: IPI shorthand broadcast: enabled Sep 4 17:42:54.085621 kernel: sched_clock: Marking stable (849003000, 48825900)->(1116068200, -218239300) Sep 4 17:42:54.085635 kernel: registered taskstats version 1 Sep 4 17:42:54.085648 kernel: Loading compiled-in X.509 certificates Sep 4 17:42:54.085662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:42:54.085677 kernel: Key type .fscrypt registered Sep 4 17:42:54.085691 kernel: Key type fscrypt-provisioning registered Sep 4 17:42:54.085707 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:42:54.085726 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:42:54.085741 kernel: ima: No architecture policies found Sep 4 17:42:54.085755 kernel: clk: Disabling unused clocks Sep 4 17:42:54.085769 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 17:42:54.085783 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:42:54.085796 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:42:54.085810 kernel: Run /init as init process Sep 4 17:42:54.085823 kernel: with arguments: Sep 4 17:42:54.085836 kernel: /init Sep 4 17:42:54.085850 kernel: with environment: Sep 4 17:42:54.085866 kernel: HOME=/ Sep 4 17:42:54.085879 kernel: TERM=linux Sep 4 17:42:54.085892 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:42:54.085909 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:42:54.085925 systemd[1]: Detected virtualization microsoft. Sep 4 17:42:54.085940 systemd[1]: Detected architecture x86-64. Sep 4 17:42:54.085953 systemd[1]: Running in initrd. Sep 4 17:42:54.085970 systemd[1]: No hostname configured, using default hostname. Sep 4 17:42:54.085985 systemd[1]: Hostname set to . Sep 4 17:42:54.085999 systemd[1]: Initializing machine ID from random generator. 
Sep 4 17:42:54.086013 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:42:54.086028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:42:54.086043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:42:54.086059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:42:54.086073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:42:54.086091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:42:54.086106 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:42:54.086122 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:42:54.086137 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:42:54.086152 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:42:54.086205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:42:54.086221 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:42:54.086240 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:42:54.086254 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:42:54.086269 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:42:54.086284 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:42:54.086298 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:42:54.086313 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:42:54.086329 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:42:54.086344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:42:54.086359 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:42:54.086377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:42:54.086392 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:42:54.086407 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:42:54.086422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:42:54.086437 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:42:54.086451 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:42:54.086467 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:42:54.086482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:42:54.086500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:54.086542 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:42:54.086576 systemd-journald[176]: Journal started Sep 4 17:42:54.086611 systemd-journald[176]: Runtime Journal (/run/log/journal/1de929accbee4764b6ec59fbdac3c880) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:42:54.097040 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 4 17:42:54.096699 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:42:54.098508 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:42:54.101878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:42:54.105595 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:42:54.132446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:42:54.138904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:42:54.138933 kernel: Bridge firewalling registered Sep 4 17:42:54.142433 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:42:54.151395 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:42:54.158684 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:42:54.161878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:54.162136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:42:54.165423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:42:54.177113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:42:54.187332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:42:54.205848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:42:54.214292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:42:54.220479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:42:54.227512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:42:54.235450 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:42:54.243342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:42:54.255964 dracut-cmdline[212]: dracut-dracut-053 Sep 4 17:42:54.258949 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:42:54.302157 systemd-resolved[217]: Positive Trust Anchors: Sep 4 17:42:54.302291 systemd-resolved[217]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:42:54.302346 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:42:54.328419 systemd-resolved[217]: Defaulting to hostname 'linux'. Sep 4 17:42:54.331792 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:42:54.338311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:42:54.351184 kernel: SCSI subsystem initialized Sep 4 17:42:54.361191 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:42:54.372201 kernel: iscsi: registered transport (tcp) Sep 4 17:42:54.393821 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:42:54.393907 kernel: QLogic iSCSI HBA Driver Sep 4 17:42:54.429861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:42:54.439484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:42:54.466573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:42:54.466667 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:42:54.470235 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:42:54.511196 kernel: raid6: avx512x4 gen() 18675 MB/s Sep 4 17:42:54.531187 kernel: raid6: avx512x2 gen() 18701 MB/s Sep 4 17:42:54.550179 kernel: raid6: avx512x1 gen() 18716 MB/s Sep 4 17:42:54.569180 kernel: raid6: avx2x4 gen() 18694 MB/s Sep 4 17:42:54.588185 kernel: raid6: avx2x2 gen() 18682 MB/s Sep 4 17:42:54.608289 kernel: raid6: avx2x1 gen() 14241 MB/s Sep 4 17:42:54.608325 kernel: raid6: using algorithm avx512x1 gen() 18716 MB/s Sep 4 17:42:54.629217 kernel: raid6: .... xor() 25532 MB/s, rmw enabled Sep 4 17:42:54.629248 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:42:54.652201 kernel: xor: automatically using best checksumming function avx Sep 4 17:42:54.805196 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:42:54.814594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:42:54.823346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:42:54.842718 systemd-udevd[397]: Using default interface naming scheme 'v255'. Sep 4 17:42:54.849606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:42:54.861378 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:42:54.875942 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Sep 4 17:42:54.905505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:42:54.913453 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:42:54.954115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:42:54.967789 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 4 17:42:55.003843 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:42:55.012705 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:42:55.016101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:42:55.019664 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:42:55.032657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:42:55.060196 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:42:55.064483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:42:55.073853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:42:55.074065 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:42:55.084586 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:42:55.087754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:42:55.105259 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:42:55.088029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:55.111982 kernel: AES CTR mode by8 optimization enabled Sep 4 17:42:55.091740 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:55.106488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:55.116846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:42:55.131205 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:42:55.116930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:55.135628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:55.164467 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:42:55.164521 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:42:55.171608 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 4 17:42:55.181015 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:42:55.181073 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:42:55.182487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:55.193474 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:42:55.201188 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:42:55.201235 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:42:55.201248 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:42:55.201480 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:42:55.216189 kernel: scsi host0: storvsc_host_t Sep 4 17:42:55.216256 kernel: scsi host1: storvsc_host_t Sep 4 17:42:55.224077 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 4 17:42:55.224669 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:42:55.234187 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:42:55.234943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 17:42:55.254193 kernel: PTP clock support registered Sep 4 17:42:55.267088 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:42:55.267158 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:42:55.273639 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:42:55.273712 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:42:55.275795 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:42:55.938619 systemd-resolved[217]: Clock change detected. Flushing caches. Sep 4 17:42:55.948231 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 17:42:55.948498 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:42:55.949951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:42:55.965954 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:42:55.966230 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:42:55.972647 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 17:42:55.972926 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:42:55.973148 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:42:55.979948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:42:55.979994 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 17:42:56.084602 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: VF slot 1 added Sep 4 17:42:56.095980 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:42:56.096052 kernel: hv_pci 45ba88dd-139e-4a44-8beb-163699efff2c: PCI VMBus probing: Using version 0x10004 Sep 4 17:42:56.103612 kernel: hv_pci 45ba88dd-139e-4a44-8beb-163699efff2c: PCI host bridge to bus 139e:00 Sep 4 17:42:56.103897 kernel: pci_bus 139e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:42:56.106855 kernel: pci_bus 139e:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:42:56.111076 kernel: pci 139e:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:42:56.115978 kernel: pci 139e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:42:56.119945 kernel: pci 139e:00:02.0: enabling Extended Tags Sep 4 17:42:56.130187 kernel: pci 139e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 139e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:42:56.136446 kernel: pci_bus 139e:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:42:56.136667 kernel: pci 139e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:42:56.312066 kernel: mlx5_core 139e:00:02.0: enabling device (0000 -> 0002) Sep 4 17:42:56.315964 kernel: mlx5_core 139e:00:02.0: firmware version: 14.30.1284 Sep 4 17:42:56.529127 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: VF registering: eth1 Sep 4 17:42:56.529466 kernel: mlx5_core 139e:00:02.0 eth1: joined to eth0 Sep 4 17:42:56.534999 kernel: mlx5_core 139e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:42:56.546962 kernel: mlx5_core 139e:00:02.0 enP5022s1: renamed from eth1 Sep 4 17:42:56.553105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:42:56.668967 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (444) Sep 4 17:42:56.677278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:42:56.691686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Sep 4 17:42:56.691825 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:42:56.711204 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:42:56.722951 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (457) Sep 4 17:42:56.749325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:42:57.740230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:42:57.740305 disk-uuid[599]: The operation has completed successfully. Sep 4 17:42:57.824039 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:42:57.824167 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:42:57.851095 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:42:57.856740 sh[716]: Success Sep 4 17:42:57.886959 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:42:58.090290 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:42:58.105081 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:42:58.107804 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:42:58.129948 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:42:58.129993 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:42:58.135321 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:42:58.138501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:42:58.141167 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:42:58.461388 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:42:58.467775 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:42:58.477079 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:42:58.485748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:42:58.504729 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:42:58.504784 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:42:58.504811 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:42:58.527367 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:42:58.540174 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:42:58.539748 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:42:58.547423 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:42:58.559074 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:42:58.580971 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:42:58.592114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:42:58.611432 systemd-networkd[900]: lo: Link UP Sep 4 17:42:58.611442 systemd-networkd[900]: lo: Gained carrier Sep 4 17:42:58.613431 systemd-networkd[900]: Enumeration completed Sep 4 17:42:58.613687 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 4 17:42:58.616463 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:42:58.616469 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:42:58.622910 systemd[1]: Reached target network.target - Network. Sep 4 17:42:58.679955 kernel: mlx5_core 139e:00:02.0 enP5022s1: Link up Sep 4 17:42:58.715018 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: Data path switched to VF: enP5022s1 Sep 4 17:42:58.715366 systemd-networkd[900]: enP5022s1: Link UP Sep 4 17:42:58.715724 systemd-networkd[900]: eth0: Link UP Sep 4 17:42:58.715882 systemd-networkd[900]: eth0: Gained carrier Sep 4 17:42:58.715893 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:42:58.729180 systemd-networkd[900]: enP5022s1: Gained carrier Sep 4 17:42:58.747972 systemd-networkd[900]: eth0: DHCPv4 address 10.200.4.10/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:42:59.455148 ignition[870]: Ignition 2.19.0 Sep 4 17:42:59.455160 ignition[870]: Stage: fetch-offline Sep 4 17:42:59.456987 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:42:59.455202 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.455212 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.455314 ignition[870]: parsed url from cmdline: "" Sep 4 17:42:59.455318 ignition[870]: no config URL provided Sep 4 17:42:59.455325 ignition[870]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:42:59.455335 ignition[870]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:42:59.455342 ignition[870]: failed to fetch config: resource requires networking Sep 4 17:42:59.455557 ignition[870]: Ignition finished successfully Sep 4 17:42:59.489098 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 17:42:59.502400 ignition[909]: Ignition 2.19.0 Sep 4 17:42:59.502411 ignition[909]: Stage: fetch Sep 4 17:42:59.502638 ignition[909]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.502651 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.502742 ignition[909]: parsed url from cmdline: "" Sep 4 17:42:59.502745 ignition[909]: no config URL provided Sep 4 17:42:59.502749 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:42:59.502756 ignition[909]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:42:59.502775 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:42:59.587471 ignition[909]: GET result: OK Sep 4 17:42:59.587845 ignition[909]: config has been read from IMDS userdata Sep 4 17:42:59.587890 ignition[909]: parsing config with SHA512: b22bfb83cf4a2b275e453933079f684503d030cc5d6f3f6b55dcce808517b4e7329d35094149b21b7eaf7da695a05d71126e709b995734482ae9974b86c6a5c4 Sep 4 17:42:59.595017 unknown[909]: fetched base config from "system" Sep 4 17:42:59.595033 unknown[909]: fetched base config from "system" Sep 4 17:42:59.595043 unknown[909]: fetched user config from "azure" Sep 4 17:42:59.599595 ignition[909]: fetch: fetch complete Sep 4 17:42:59.599602 ignition[909]: fetch: fetch passed Sep 4 17:42:59.603751 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Sep 4 17:42:59.601382 ignition[909]: Ignition finished successfully Sep 4 17:42:59.620135 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:42:59.639122 ignition[915]: Ignition 2.19.0 Sep 4 17:42:59.639133 ignition[915]: Stage: kargs Sep 4 17:42:59.639349 ignition[915]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.641268 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:42:59.639362 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.640244 ignition[915]: kargs: kargs passed Sep 4 17:42:59.640293 ignition[915]: Ignition finished successfully Sep 4 17:42:59.653159 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:42:59.671167 ignition[921]: Ignition 2.19.0 Sep 4 17:42:59.671178 ignition[921]: Stage: disks Sep 4 17:42:59.673205 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:42:59.671405 ignition[921]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.676549 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:42:59.671419 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.680810 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:42:59.672279 ignition[921]: disks: disks passed Sep 4 17:42:59.684092 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:42:59.672324 ignition[921]: Ignition finished successfully Sep 4 17:42:59.704769 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:42:59.707173 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:42:59.719095 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:42:59.775234 systemd-networkd[900]: enP5022s1: Gained IPv6LL Sep 4 17:42:59.779077 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:42:59.783998 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:42:59.795139 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:42:59.887951 kernel: EXT4-fs (sda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:42:59.888133 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:42:59.893109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:42:59.949053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:42:59.953594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:42:59.961509 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:42:59.980330 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Sep 4 17:42:59.980362 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:42:59.980379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:42:59.980395 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:42:59.985367 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:42:59.977383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:42:59.977417 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 4 17:42:59.987118 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:42:59.997225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:43:00.009105 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:43:00.159241 systemd-networkd[900]: eth0: Gained IPv6LL Sep 4 17:43:00.670709 coreos-metadata[942]: Sep 04 17:43:00.670 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:43:00.677204 coreos-metadata[942]: Sep 04 17:43:00.677 INFO Fetch successful Sep 4 17:43:00.677204 coreos-metadata[942]: Sep 04 17:43:00.677 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:43:00.696347 coreos-metadata[942]: Sep 04 17:43:00.696 INFO Fetch successful Sep 4 17:43:00.728813 coreos-metadata[942]: Sep 04 17:43:00.728 INFO wrote hostname ci-4054.1.0-a-c31d97b133 to /sysroot/etc/hostname Sep 4 17:43:00.731012 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:43:00.799550 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:43:00.855997 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:43:00.862020 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:43:00.867659 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:43:01.852110 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:43:01.863053 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:43:01.869108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:43:01.877208 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:43:01.883737 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:43:01.902798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:43:01.914155 ignition[1059]: INFO : Ignition 2.19.0 Sep 4 17:43:01.914155 ignition[1059]: INFO : Stage: mount Sep 4 17:43:01.921596 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:01.921596 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:43:01.921596 ignition[1059]: INFO : mount: mount passed Sep 4 17:43:01.921596 ignition[1059]: INFO : Ignition finished successfully Sep 4 17:43:01.916080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:43:01.934945 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:43:01.941711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:43:01.957961 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1069) Sep 4 17:43:01.958005 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:43:01.967122 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:43:01.967176 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:43:01.972950 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:43:01.974185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:43:01.996911 ignition[1086]: INFO : Ignition 2.19.0 Sep 4 17:43:01.996911 ignition[1086]: INFO : Stage: files Sep 4 17:43:02.001344 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:02.001344 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:43:02.001344 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:43:02.001344 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:43:02.001344 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:43:02.075872 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:43:02.080166 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:43:02.080166 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:43:02.076512 unknown[1086]: wrote ssh authorized keys file for user: core Sep 4 17:43:02.157313 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:43:02.162813 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:43:02.185836 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:43:02.242103 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Sep 4 17:43:02.744732 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:43:03.069378 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:03.069378 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:43:03.087600 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: files passed Sep 4 17:43:03.095613 ignition[1086]: INFO : Ignition finished successfully Sep 4 17:43:03.089481 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:43:03.112178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:43:03.125086 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:43:03.128639 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:43:03.128752 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:43:03.168127 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:03.168127 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:03.177985 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:03.184533 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:43:03.187851 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:43:03.201088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:43:03.225198 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:43:03.225315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:43:03.234386 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 4 17:43:03.237251 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:43:03.245485 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:43:03.256172 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:43:03.267852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:43:03.276082 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:43:03.287236 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:43:03.293644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:43:03.293877 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:43:03.294276 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:43:03.294412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:43:03.295146 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:43:03.295610 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:43:03.296029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:43:03.296663 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:43:03.297141 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:43:03.297612 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:43:03.298071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:43:03.298507 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:43:03.298953 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:43:03.299485 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:43:03.299913 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:43:03.300049 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:43:03.300871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:43:03.301351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:43:03.301739 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:43:03.340436 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:43:03.391422 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:43:03.391633 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:43:03.400194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:43:03.400405 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:43:03.410915 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:43:03.411131 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:43:03.416370 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:43:03.416526 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:43:03.435170 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:43:03.441169 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:43:03.443742 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 4 17:43:03.443918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:43:03.450556 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:43:03.450663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:43:03.464652 ignition[1139]: INFO : Ignition 2.19.0 Sep 4 17:43:03.464652 ignition[1139]: INFO : Stage: umount Sep 4 17:43:03.464652 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:03.464652 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:43:03.484330 ignition[1139]: INFO : umount: umount passed Sep 4 17:43:03.484330 ignition[1139]: INFO : Ignition finished successfully Sep 4 17:43:03.466329 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:43:03.466452 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:43:03.478211 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:43:03.478314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:43:03.499858 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:43:03.502457 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:43:03.502618 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:43:03.502664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:43:03.503033 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:43:03.503069 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:43:03.503453 systemd[1]: Stopped target network.target - Network. Sep 4 17:43:03.503879 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:43:03.503917 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:43:03.504902 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:43:03.505327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:43:03.522734 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:43:03.533338 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:43:03.538136 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:43:03.543401 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:43:03.543445 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:43:03.551428 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:43:03.554380 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:43:03.559804 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:43:03.559870 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:43:03.565301 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:43:03.569209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:43:03.591082 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:43:03.601627 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:43:03.608473 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:43:03.608983 systemd-networkd[900]: eth0: DHCPv6 lease lost Sep 4 17:43:03.613474 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:43:03.616054 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Sep 4 17:43:03.622861 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:43:03.622923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:43:03.644140 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:43:03.649147 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:43:03.649230 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:43:03.659233 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:43:03.663626 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:43:03.664955 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:43:03.680289 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:43:03.680463 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:43:03.685318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:43:03.685387 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:43:03.696985 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:43:03.697028 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:43:03.699812 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:43:03.699861 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:43:03.711157 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:43:03.713844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:43:03.721395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:43:03.721465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:43:03.735679 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: Data path switched from VF: enP5022s1 Sep 4 17:43:03.738148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:43:03.741297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:43:03.744241 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:43:03.749888 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:43:03.750015 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:43:03.761027 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:43:03.761087 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:43:03.770719 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:43:03.770779 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:43:03.780901 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:43:03.780980 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:43:03.786762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:43:03.786815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:43:03.799216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:43:03.799274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 17:43:03.805343 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:43:03.805448 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:43:03.811304 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:43:03.813750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:43:04.447396 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:43:04.447554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:43:04.448028 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:43:04.448263 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:43:04.448323 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:43:04.469737 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:43:04.478053 systemd[1]: Switching root. Sep 4 17:43:04.724546 systemd-journald[176]: Journal stopped Sep 4 17:42:54.080908 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:42:54.080945 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:42:54.080960 kernel: BIOS-provided physical RAM map: Sep 4 17:42:54.080971 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:42:54.080981 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 4 17:42:54.080992 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Sep 4 17:42:54.081005 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Sep 4 17:42:54.081020 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Sep 4 17:42:54.081031 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 4 17:42:54.081042 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 4 17:42:54.081054 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 4 17:42:54.081065 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 4 17:42:54.081076 kernel: printk: bootconsole [earlyser0] enabled Sep 4 17:42:54.081088 kernel: NX (Execute Disable) protection: active Sep 4 17:42:54.081105 kernel: APIC: Static calls initialized Sep 4 17:42:54.081118 kernel: efi: EFI v2.7 by Microsoft Sep 4 17:42:54.081131 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Sep 4 17:42:54.081143 kernel: SMBIOS 3.1.0 present. 
Sep 4 17:42:54.081156 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 4 17:42:54.081188 kernel: Hypervisor detected: Microsoft Hyper-V Sep 4 17:42:54.081201 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 4 17:42:54.081213 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Sep 4 17:42:54.081225 kernel: Hyper-V: Nested features: 0x1e0101 Sep 4 17:42:54.081238 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 4 17:42:54.081254 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 4 17:42:54.081266 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:42:54.081279 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:42:54.081293 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 4 17:42:54.081306 kernel: tsc: Detected 2593.908 MHz processor Sep 4 17:42:54.081318 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:42:54.081331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:42:54.081344 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 4 17:42:54.081357 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:42:54.081372 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:42:54.081385 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 4 17:42:54.081398 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 4 17:42:54.081410 kernel: Using GB pages for direct mapping Sep 4 17:42:54.081423 kernel: Secure boot disabled Sep 4 17:42:54.081436 kernel: ACPI: Early table checksum verification disabled Sep 4 17:42:54.081448 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 4 17:42:54.081466 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081483 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081496 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 4 17:42:54.081510 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 4 17:42:54.081524 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081537 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081551 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081567 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081581 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081594 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081608 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:42:54.081621 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 4 17:42:54.081635 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 4 17:42:54.081649 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 4 17:42:54.081663 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 4 17:42:54.081679 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Sep 4 17:42:54.081692 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 4 17:42:54.081706 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 4 17:42:54.081720 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 4 17:42:54.081734 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 4 17:42:54.081747 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 4 17:42:54.081761 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:42:54.081774 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:42:54.081788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 4 17:42:54.081804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 4 17:42:54.081818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 4 17:42:54.081832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 4 17:42:54.081845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 4 17:42:54.081859 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 4 17:42:54.081872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 4 17:42:54.081886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 4 17:42:54.081900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 4 17:42:54.081913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 4 17:42:54.081929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 4 17:42:54.081943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 4 17:42:54.081957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 4 17:42:54.081970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 4 17:42:54.081984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 4 17:42:54.081998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 4 17:42:54.082011 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 4 17:42:54.082025 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 4 17:42:54.082038 kernel: Zone ranges: Sep 4 17:42:54.082055 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:42:54.082068 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:42:54.082082 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:42:54.082096 kernel: Movable zone start for each node Sep 4 17:42:54.082109 kernel: Early memory node ranges Sep 4 17:42:54.082123 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:42:54.082136 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 4 17:42:54.082150 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 4 17:42:54.082163 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:42:54.082187 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 4 17:42:54.082200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:42:54.082214 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:42:54.082227 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Sep 4 17:42:54.082241 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 4 
17:42:54.082254 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 4 17:42:54.082268 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:42:54.082282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:42:54.082295 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:42:54.082312 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 4 17:42:54.082326 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:42:54.082339 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 4 17:42:54.082353 kernel: Booting paravirtualized kernel on Hyper-V Sep 4 17:42:54.082367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:42:54.082381 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:42:54.082394 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:42:54.082408 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:42:54.082421 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:42:54.082437 kernel: Hyper-V: PV spinlocks enabled Sep 4 17:42:54.082450 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:42:54.082466 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:42:54.082480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:42:54.082493 kernel: random: crng init done Sep 4 17:42:54.082506 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:42:54.082520 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:42:54.082534 kernel: Fallback order for Node 0: 0 Sep 4 17:42:54.082550 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 4 17:42:54.082574 kernel: Policy zone: Normal Sep 4 17:42:54.082590 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:42:54.082605 kernel: software IO TLB: area num 2. Sep 4 17:42:54.082619 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 310124K reserved, 0K cma-reserved) Sep 4 17:42:54.082634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:42:54.082648 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:42:54.082663 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:42:54.082677 kernel: Dynamic Preempt: voluntary Sep 4 17:42:54.082691 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:42:54.082707 kernel: rcu: RCU event tracing is enabled. Sep 4 17:42:54.082724 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:42:54.082739 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:42:54.082754 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:42:54.082768 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:42:54.082783 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
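The per-CPU embedding line above is internally consistent: the static, reserved, and dynamic byte counts add up to exactly the 58 pages per CPU that the kernel reports (4 KiB pages). A quick check:

    # Check the percpu figures: s196904 + r8192 + d32472 bytes == 58 pages of 4 KiB.
    static, reserved, dynamic = 196_904, 8_192, 32_472
    pages = (static + reserved + dynamic) / 4096
    print(pages)   # 58.0, matching "Embedded 58 pages/cpu"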
Sep 4 17:42:54.082800 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:42:54.082814 kernel: Using NULL legacy PIC Sep 4 17:42:54.082829 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 4 17:42:54.082843 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:42:54.082858 kernel: Console: colour dummy device 80x25 Sep 4 17:42:54.082872 kernel: printk: console [tty1] enabled Sep 4 17:42:54.082887 kernel: printk: console [ttyS0] enabled Sep 4 17:42:54.082901 kernel: printk: bootconsole [earlyser0] disabled Sep 4 17:42:54.082915 kernel: ACPI: Core revision 20230628 Sep 4 17:42:54.082930 kernel: Failed to register legacy timer interrupt Sep 4 17:42:54.082947 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:42:54.082962 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:42:54.082976 kernel: Hyper-V: Using IPI hypercalls Sep 4 17:42:54.082991 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 4 17:42:54.083005 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 4 17:42:54.083020 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 4 17:42:54.083035 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 4 17:42:54.083049 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 4 17:42:54.083064 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 4 17:42:54.083082 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Sep 4 17:42:54.083096 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:42:54.083111 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:42:54.083125 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:42:54.083140 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:42:54.083154 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:42:54.083174 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:42:54.083190 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
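Because the delay-loop calibration is skipped on Hyper-V, the BogoMIPS value above is derived from the timer frequency. With the logged lpj=2593908 and assuming HZ=1000, the standard formula reproduces both the per-CPU figure and the two-CPU total reported later at SMP bring-up:

    # Check the BogoMIPS arithmetic from the logged loops-per-jiffy (lpj) value.
    # Assumes HZ=1000; BogoMIPS = lpj * HZ / 500000.
    lpj, hz = 2_593_908, 1000
    per_cpu = lpj * hz / 500_000   # 5187.816 -> printed as 5187.81
    total = 2 * per_cpu            # 10375.632 -> printed as 10375.63 for 2 CPUs
    print(per_cpu, total)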
Sep 4 17:42:54.083204 kernel: RETBleed: Vulnerable Sep 4 17:42:54.083221 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:42:54.083236 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:42:54.083250 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:42:54.083264 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:42:54.083279 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:42:54.083293 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:42:54.083308 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:42:54.083322 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:42:54.083337 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:42:54.083351 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:42:54.083365 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:42:54.083383 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 4 17:42:54.083397 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 4 17:42:54.083411 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 4 17:42:54.083426 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 4 17:42:54.083440 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:42:54.083454 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:42:54.083468 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:42:54.083483 kernel: landlock: Up and running. Sep 4 17:42:54.083497 kernel: SELinux: Initializing. Sep 4 17:42:54.083511 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.083526 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.083541 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:42:54.083558 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:42:54.083572 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:42:54.083587 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:42:54.083601 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:42:54.083616 kernel: signal: max sigframe size: 3632 Sep 4 17:42:54.083631 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:42:54.083645 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:42:54.083660 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:42:54.083674 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:42:54.083691 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:42:54.083706 kernel: .... node #0, CPUs: #1 Sep 4 17:42:54.083721 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 4 17:42:54.083736 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:42:54.083750 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:42:54.083765 kernel: smpboot: Max logical packages: 1 Sep 4 17:42:54.083779 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Sep 4 17:42:54.083794 kernel: devtmpfs: initialized Sep 4 17:42:54.083811 kernel: x86/mm: Memory block size: 128MB Sep 4 17:42:54.083825 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 4 17:42:54.083840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:42:54.083855 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:42:54.083869 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:42:54.083884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:42:54.083898 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:42:54.083913 kernel: audit: type=2000 audit(1725471772.028:1): state=initialized audit_enabled=0 res=1 Sep 4 17:42:54.083927 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:42:54.083944 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:42:54.083959 kernel: cpuidle: using governor menu Sep 4 17:42:54.083973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:42:54.083987 kernel: dca service started, version 1.12.1 Sep 4 17:42:54.084002 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 4 17:42:54.084017 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:42:54.084031 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:42:54.084046 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:42:54.084060 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:42:54.084078 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:42:54.084092 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:42:54.084107 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:42:54.084121 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:42:54.084135 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:42:54.084150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:42:54.084165 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:42:54.084186 kernel: ACPI: Interpreter enabled Sep 4 17:42:54.084201 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:42:54.084218 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:42:54.084233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:42:54.084248 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:42:54.084262 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 4 17:42:54.084276 kernel: iommu: Default domain type: Translated Sep 4 17:42:54.084291 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:42:54.084306 kernel: efivars: Registered efivars operations Sep 4 17:42:54.084320 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:42:54.084334 kernel: PCI: System does not support PCI Sep 4 17:42:54.084348 kernel: vgaarb: loaded Sep 4 17:42:54.084361 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 4 17:42:54.084375 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:42:54.084387 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:42:54.084400 kernel: pnp: PnP ACPI init Sep 4 17:42:54.084414 kernel: 
pnp: PnP ACPI: found 3 devices Sep 4 17:42:54.084428 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:42:54.084443 kernel: NET: Registered PF_INET protocol family Sep 4 17:42:54.084457 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:42:54.084475 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:42:54.084490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:42:54.084504 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:42:54.084519 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:42:54.084534 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:42:54.084548 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.084563 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:42:54.084577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:42:54.084592 kernel: NET: Registered PF_XDP protocol family Sep 4 17:42:54.084610 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:42:54.084625 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:42:54.084640 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Sep 4 17:42:54.084654 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:42:54.084668 kernel: Initialise system trusted keyrings Sep 4 17:42:54.084682 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 17:42:54.084696 kernel: Key type asymmetric registered Sep 4 17:42:54.084711 kernel: Asymmetric key parser 'x509' registered Sep 4 17:42:54.084725 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:42:54.084743 kernel: io scheduler mq-deadline registered Sep 4 17:42:54.084757 kernel: io scheduler kyber registered Sep 4 17:42:54.084771 kernel: io scheduler bfq registered Sep 4 17:42:54.084786 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:42:54.084800 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:42:54.084815 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:42:54.084830 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:42:54.084844 kernel: i8042: PNP: No PS/2 controller found. 
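The software IO TLB mapping above spans 0x3b5c1000 to 0x3f5c1000, which is exactly the 64 MB the kernel reports:

    # Verify the software IO TLB size from the logged address range.
    start, end = 0x3b5c1000, 0x3f5c1000
    print((end - start) // (1 << 20))   # 64 (MiB), matching "(64MB)"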
Sep 4 17:42:54.085035 kernel: rtc_cmos 00:02: registered as rtc0 Sep 4 17:42:54.085230 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:42:53 UTC (1725471773) Sep 4 17:42:54.085371 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 4 17:42:54.085392 kernel: intel_pstate: CPU model not supported Sep 4 17:42:54.085407 kernel: efifb: probing for efifb Sep 4 17:42:54.085421 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:42:54.085434 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:42:54.085448 kernel: efifb: scrolling: redraw Sep 4 17:42:54.085462 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:42:54.085480 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:42:54.085494 kernel: fb0: EFI VGA frame buffer device Sep 4 17:42:54.085508 kernel: pstore: Using crash dump compression: deflate Sep 4 17:42:54.085521 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:42:54.085535 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:42:54.085549 kernel: Segment Routing with IPv6 Sep 4 17:42:54.085563 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:42:54.085577 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:42:54.085591 kernel: Key type dns_resolver registered Sep 4 17:42:54.085607 kernel: IPI shorthand broadcast: enabled Sep 4 17:42:54.085621 kernel: sched_clock: Marking stable (849003000, 48825900)->(1116068200, -218239300) Sep 4 17:42:54.085635 kernel: registered taskstats version 1 Sep 4 17:42:54.085648 kernel: Loading compiled-in X.509 certificates Sep 4 17:42:54.085662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:42:54.085677 kernel: Key type .fscrypt registered Sep 4 17:42:54.085691 kernel: Key type fscrypt-provisioning registered Sep 4 17:42:54.085707 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:42:54.085726 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:42:54.085741 kernel: ima: No architecture policies found Sep 4 17:42:54.085755 kernel: clk: Disabling unused clocks Sep 4 17:42:54.085769 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 17:42:54.085783 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:42:54.085796 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:42:54.085810 kernel: Run /init as init process Sep 4 17:42:54.085823 kernel: with arguments: Sep 4 17:42:54.085836 kernel: /init Sep 4 17:42:54.085850 kernel: with environment: Sep 4 17:42:54.085866 kernel: HOME=/ Sep 4 17:42:54.085879 kernel: TERM=linux Sep 4 17:42:54.085892 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:42:54.085909 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:42:54.085925 systemd[1]: Detected virtualization microsoft. Sep 4 17:42:54.085940 systemd[1]: Detected architecture x86-64. Sep 4 17:42:54.085953 systemd[1]: Running in initrd. Sep 4 17:42:54.085970 systemd[1]: No hostname configured, using default hostname. Sep 4 17:42:54.085985 systemd[1]: Hostname set to . Sep 4 17:42:54.085999 systemd[1]: Initializing machine ID from random generator. 
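The rtc_cmos line above sets the system clock from the hardware clock and prints both the ISO timestamp and the epoch value; the two agree:

    # Confirm the logged epoch value matches the ISO timestamp from rtc_cmos.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1725471773, tz=timezone.utc).isoformat())
    # -> 2024-09-04T17:42:53+00:00, matching "2024-09-04T17:42:53 UTC (1725471773)"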
Sep 4 17:42:54.086013 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:42:54.086028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:42:54.086043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:42:54.086059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:42:54.086073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:42:54.086091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:42:54.086106 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:42:54.086122 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:42:54.086137 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:42:54.086152 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:42:54.086205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:42:54.086221 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:42:54.086240 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:42:54.086254 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:42:54.086269 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:42:54.086284 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:42:54.086298 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:42:54.086313 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:42:54.086329 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:42:54.086344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:42:54.086359 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:42:54.086377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:42:54.086392 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:42:54.086407 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:42:54.086422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:42:54.086437 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:42:54.086451 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:42:54.086467 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:42:54.086482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:42:54.086500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:54.086542 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:42:54.086576 systemd-journald[176]: Journal started Sep 4 17:42:54.086611 systemd-journald[176]: Runtime Journal (/run/log/journal/1de929accbee4764b6ec59fbdac3c880) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:42:54.097040 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 4 17:42:54.096699 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:42:54.098508 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:42:54.101878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:42:54.105595 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:42:54.132446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:42:54.138904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:42:54.138933 kernel: Bridge firewalling registered Sep 4 17:42:54.142433 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:42:54.151395 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:42:54.158684 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:42:54.161878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:54.162136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:42:54.165423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:42:54.177113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:42:54.187332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:42:54.205848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:42:54.214292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:42:54.220479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:42:54.227512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:42:54.235450 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:42:54.243342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:42:54.255964 dracut-cmdline[212]: dracut-dracut-053 Sep 4 17:42:54.258949 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:42:54.302157 systemd-resolved[217]: Positive Trust Anchors: Sep 4 17:42:54.302291 systemd-resolved[217]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:42:54.302346 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:42:54.328419 systemd-resolved[217]: Defaulting to hostname 'linux'. Sep 4 17:42:54.331792 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:42:54.338311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:42:54.351184 kernel: SCSI subsystem initialized Sep 4 17:42:54.361191 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:42:54.372201 kernel: iscsi: registered transport (tcp) Sep 4 17:42:54.393821 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:42:54.393907 kernel: QLogic iSCSI HBA Driver Sep 4 17:42:54.429861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:42:54.439484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:42:54.466573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:42:54.466667 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:42:54.470235 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:42:54.511196 kernel: raid6: avx512x4 gen() 18675 MB/s Sep 4 17:42:54.531187 kernel: raid6: avx512x2 gen() 18701 MB/s Sep 4 17:42:54.550179 kernel: raid6: avx512x1 gen() 18716 MB/s Sep 4 17:42:54.569180 kernel: raid6: avx2x4 gen() 18694 MB/s Sep 4 17:42:54.588185 kernel: raid6: avx2x2 gen() 18682 MB/s Sep 4 17:42:54.608289 kernel: raid6: avx2x1 gen() 14241 MB/s Sep 4 17:42:54.608325 kernel: raid6: using algorithm avx512x1 gen() 18716 MB/s Sep 4 17:42:54.629217 kernel: raid6: .... xor() 25532 MB/s, rmw enabled Sep 4 17:42:54.629248 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:42:54.652201 kernel: xor: automatically using best checksumming function avx Sep 4 17:42:54.805196 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:42:54.814594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:42:54.823346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:42:54.842718 systemd-udevd[397]: Using default interface naming scheme 'v255'. Sep 4 17:42:54.849606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:42:54.861378 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:42:54.875942 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Sep 4 17:42:54.905505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:42:54.913453 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:42:54.954115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:42:54.967789 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
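The raid6 benchmark above times each gen() implementation and keeps the fastest; from the throughputs in the log, avx512x1 wins at 18716 MB/s, which is what the kernel reports using. A trivial reproduction of that selection over the logged numbers:

    # Pick the fastest raid6 gen() implementation from the logged throughputs (MB/s).
    results = {
        "avx512x4": 18675, "avx512x2": 18701, "avx512x1": 18716,
        "avx2x4": 18694, "avx2x2": 18682, "avx2x1": 14241,
    }
    best = max(results, key=results.get)
    print(best, results[best])   # avx512x1 18716, as chosen in the log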
Sep 4 17:42:55.003843 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:42:55.012705 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:42:55.016101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:42:55.019664 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:42:55.032657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:42:55.060196 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:42:55.064483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:42:55.073853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:42:55.074065 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:42:55.084586 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:42:55.087754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:42:55.105259 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:42:55.088029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:55.111982 kernel: AES CTR mode by8 optimization enabled Sep 4 17:42:55.091740 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:55.106488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:55.116846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:42:55.131205 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:42:55.116930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:55.135628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:42:55.164467 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:42:55.164521 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:42:55.171608 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 4 17:42:55.181015 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:42:55.181073 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:42:55.182487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:42:55.193474 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:42:55.201188 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:42:55.201235 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:42:55.201248 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:42:55.201480 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:42:55.216189 kernel: scsi host0: storvsc_host_t Sep 4 17:42:55.216256 kernel: scsi host1: storvsc_host_t Sep 4 17:42:55.224077 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 4 17:42:55.224669 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:42:55.234187 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:42:55.234943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 17:42:55.254193 kernel: PTP clock support registered Sep 4 17:42:55.267088 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:42:55.267158 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:42:55.273639 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:42:55.273712 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:42:55.275795 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:42:55.938619 systemd-resolved[217]: Clock change detected. Flushing caches. Sep 4 17:42:55.948231 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 17:42:55.948498 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:42:55.949951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:42:55.965954 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:42:55.966230 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:42:55.972647 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 17:42:55.972926 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:42:55.973148 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:42:55.979948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:42:55.979994 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 17:42:56.084602 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: VF slot 1 added Sep 4 17:42:56.095980 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:42:56.096052 kernel: hv_pci 45ba88dd-139e-4a44-8beb-163699efff2c: PCI VMBus probing: Using version 0x10004 Sep 4 17:42:56.103612 kernel: hv_pci 45ba88dd-139e-4a44-8beb-163699efff2c: PCI host bridge to bus 139e:00 Sep 4 17:42:56.103897 kernel: pci_bus 139e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:42:56.106855 kernel: pci_bus 139e:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:42:56.111076 kernel: pci 139e:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:42:56.115978 kernel: pci 139e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:42:56.119945 kernel: pci 139e:00:02.0: enabling Extended Tags Sep 4 17:42:56.130187 kernel: pci 139e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 139e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:42:56.136446 kernel: pci_bus 139e:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:42:56.136667 kernel: pci 139e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:42:56.312066 kernel: mlx5_core 139e:00:02.0: enabling device (0000 -> 0002) Sep 4 17:42:56.315964 kernel: mlx5_core 139e:00:02.0: firmware version: 14.30.1284 Sep 4 17:42:56.529127 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: VF registering: eth1 Sep 4 17:42:56.529466 kernel: mlx5_core 139e:00:02.0 eth1: joined to eth0 Sep 4 17:42:56.534999 kernel: mlx5_core 139e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:42:56.546962 kernel: mlx5_core 139e:00:02.0 enP5022s1: renamed from eth1 Sep 4 17:42:56.553105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:42:56.668967 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (444) Sep 4 17:42:56.677278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:42:56.691686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Sep 4 17:42:56.691825 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:42:56.711204 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:42:56.722951 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (457) Sep 4 17:42:56.749325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:42:57.740230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:42:57.740305 disk-uuid[599]: The operation has completed successfully. Sep 4 17:42:57.824039 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:42:57.824167 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:42:57.851095 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:42:57.856740 sh[716]: Success Sep 4 17:42:57.886959 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:42:58.090290 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:42:58.105081 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:42:58.107804 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:42:58.129948 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:42:58.129993 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:42:58.135321 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:42:58.138501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:42:58.141167 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:42:58.461388 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:42:58.467775 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:42:58.477079 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:42:58.485748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:42:58.504729 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:42:58.504784 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:42:58.504811 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:42:58.527367 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:42:58.540174 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:42:58.539748 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:42:58.547423 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:42:58.559074 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:42:58.580971 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:42:58.592114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:42:58.611432 systemd-networkd[900]: lo: Link UP Sep 4 17:42:58.611442 systemd-networkd[900]: lo: Gained carrier Sep 4 17:42:58.613431 systemd-networkd[900]: Enumeration completed Sep 4 17:42:58.613687 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 4 17:42:58.616463 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:42:58.616469 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:42:58.622910 systemd[1]: Reached target network.target - Network. Sep 4 17:42:58.679955 kernel: mlx5_core 139e:00:02.0 enP5022s1: Link up Sep 4 17:42:58.715018 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: Data path switched to VF: enP5022s1 Sep 4 17:42:58.715366 systemd-networkd[900]: enP5022s1: Link UP Sep 4 17:42:58.715724 systemd-networkd[900]: eth0: Link UP Sep 4 17:42:58.715882 systemd-networkd[900]: eth0: Gained carrier Sep 4 17:42:58.715893 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:42:58.729180 systemd-networkd[900]: enP5022s1: Gained carrier Sep 4 17:42:58.747972 systemd-networkd[900]: eth0: DHCPv4 address 10.200.4.10/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:42:59.455148 ignition[870]: Ignition 2.19.0 Sep 4 17:42:59.455160 ignition[870]: Stage: fetch-offline Sep 4 17:42:59.456987 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:42:59.455202 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.455212 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.455314 ignition[870]: parsed url from cmdline: "" Sep 4 17:42:59.455318 ignition[870]: no config URL provided Sep 4 17:42:59.455325 ignition[870]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:42:59.455335 ignition[870]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:42:59.455342 ignition[870]: failed to fetch config: resource requires networking Sep 4 17:42:59.455557 ignition[870]: Ignition finished successfully Sep 4 17:42:59.489098 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 17:42:59.502400 ignition[909]: Ignition 2.19.0 Sep 4 17:42:59.502411 ignition[909]: Stage: fetch Sep 4 17:42:59.502638 ignition[909]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.502651 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.502742 ignition[909]: parsed url from cmdline: "" Sep 4 17:42:59.502745 ignition[909]: no config URL provided Sep 4 17:42:59.502749 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:42:59.502756 ignition[909]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:42:59.502775 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:42:59.587471 ignition[909]: GET result: OK Sep 4 17:42:59.587845 ignition[909]: config has been read from IMDS userdata Sep 4 17:42:59.587890 ignition[909]: parsing config with SHA512: b22bfb83cf4a2b275e453933079f684503d030cc5d6f3f6b55dcce808517b4e7329d35094149b21b7eaf7da695a05d71126e709b995734482ae9974b86c6a5c4 Sep 4 17:42:59.595017 unknown[909]: fetched base config from "system" Sep 4 17:42:59.595033 unknown[909]: fetched base config from "system" Sep 4 17:42:59.595043 unknown[909]: fetched user config from "azure" Sep 4 17:42:59.599595 ignition[909]: fetch: fetch complete Sep 4 17:42:59.599602 ignition[909]: fetch: fetch passed Sep 4 17:42:59.603751 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Sep 4 17:42:59.601382 ignition[909]: Ignition finished successfully Sep 4 17:42:59.620135 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:42:59.639122 ignition[915]: Ignition 2.19.0 Sep 4 17:42:59.639133 ignition[915]: Stage: kargs Sep 4 17:42:59.639349 ignition[915]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.641268 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:42:59.639362 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.640244 ignition[915]: kargs: kargs passed Sep 4 17:42:59.640293 ignition[915]: Ignition finished successfully Sep 4 17:42:59.653159 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:42:59.671167 ignition[921]: Ignition 2.19.0 Sep 4 17:42:59.671178 ignition[921]: Stage: disks Sep 4 17:42:59.673205 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:42:59.671405 ignition[921]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:42:59.676549 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:42:59.671419 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:42:59.680810 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:42:59.672279 ignition[921]: disks: disks passed Sep 4 17:42:59.684092 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:42:59.672324 ignition[921]: Ignition finished successfully Sep 4 17:42:59.704769 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:42:59.707173 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:42:59.719095 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:42:59.775234 systemd-networkd[900]: enP5022s1: Gained IPv6LL Sep 4 17:42:59.779077 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:42:59.783998 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:42:59.795139 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:42:59.887951 kernel: EXT4-fs (sda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:42:59.888133 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:42:59.893109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:42:59.949053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:42:59.953594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:42:59.961509 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:42:59.980330 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Sep 4 17:42:59.980362 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:42:59.980379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:42:59.980395 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:42:59.985367 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:42:59.977383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:42:59.977417 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 4 17:42:59.987118 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:42:59.997225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:43:00.009105 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:43:00.159241 systemd-networkd[900]: eth0: Gained IPv6LL Sep 4 17:43:00.670709 coreos-metadata[942]: Sep 04 17:43:00.670 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:43:00.677204 coreos-metadata[942]: Sep 04 17:43:00.677 INFO Fetch successful Sep 4 17:43:00.677204 coreos-metadata[942]: Sep 04 17:43:00.677 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:43:00.696347 coreos-metadata[942]: Sep 04 17:43:00.696 INFO Fetch successful Sep 4 17:43:00.728813 coreos-metadata[942]: Sep 04 17:43:00.728 INFO wrote hostname ci-4054.1.0-a-c31d97b133 to /sysroot/etc/hostname Sep 4 17:43:00.731012 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:43:00.799550 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:43:00.855997 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:43:00.862020 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:43:00.867659 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:43:01.852110 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:43:01.863053 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:43:01.869108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:43:01.877208 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:43:01.883737 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:43:01.902798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:43:01.914155 ignition[1059]: INFO : Ignition 2.19.0 Sep 4 17:43:01.914155 ignition[1059]: INFO : Stage: mount Sep 4 17:43:01.921596 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:01.921596 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:43:01.921596 ignition[1059]: INFO : mount: mount passed Sep 4 17:43:01.921596 ignition[1059]: INFO : Ignition finished successfully Sep 4 17:43:01.916080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:43:01.934945 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:43:01.941711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:43:01.957961 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1069) Sep 4 17:43:01.958005 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:43:01.967122 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:43:01.967176 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:43:01.972950 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:43:01.974185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:43:01.996911 ignition[1086]: INFO : Ignition 2.19.0 Sep 4 17:43:01.996911 ignition[1086]: INFO : Stage: files Sep 4 17:43:02.001344 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:02.001344 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:43:02.001344 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:43:02.001344 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:43:02.001344 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:43:02.075872 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:43:02.080166 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:43:02.080166 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:43:02.076512 unknown[1086]: wrote ssh authorized keys file for user: core Sep 4 17:43:02.157313 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:43:02.162813 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:43:02.185836 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:43:02.242103 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:02.248179 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Sep 4 17:43:02.744732 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:43:03.069378 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:43:03.069378 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:43:03.087600 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:43:03.095613 ignition[1086]: INFO : files: files passed Sep 4 17:43:03.095613 ignition[1086]: INFO : Ignition finished successfully Sep 4 17:43:03.089481 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:43:03.112178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:43:03.125086 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:43:03.128639 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:43:03.128752 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:43:03.168127 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:03.168127 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:03.177985 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:03.184533 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:43:03.187851 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:43:03.201088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:43:03.225198 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:43:03.225315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:43:03.234386 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 4 17:43:03.237251 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:43:03.245485 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:43:03.256172 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:43:03.267852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:43:03.276082 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:43:03.287236 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:43:03.293644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:43:03.293877 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:43:03.294276 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:43:03.294412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:43:03.295146 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:43:03.295610 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:43:03.296029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:43:03.296663 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:43:03.297141 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:43:03.297612 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:43:03.298071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:43:03.298507 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:43:03.298953 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:43:03.299485 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:43:03.299913 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:43:03.300049 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:43:03.300871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:43:03.301351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:43:03.301739 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:43:03.340436 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:43:03.391422 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:43:03.391633 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:43:03.400194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:43:03.400405 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:43:03.410915 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:43:03.411131 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:43:03.416370 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:43:03.416526 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:43:03.435170 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:43:03.441169 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:43:03.443742 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 4 17:43:03.443918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:43:03.450556 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:43:03.450663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:43:03.464652 ignition[1139]: INFO : Ignition 2.19.0 Sep 4 17:43:03.464652 ignition[1139]: INFO : Stage: umount Sep 4 17:43:03.464652 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:03.464652 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:43:03.484330 ignition[1139]: INFO : umount: umount passed Sep 4 17:43:03.484330 ignition[1139]: INFO : Ignition finished successfully Sep 4 17:43:03.466329 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:43:03.466452 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:43:03.478211 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:43:03.478314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:43:03.499858 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:43:03.502457 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:43:03.502618 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:43:03.502664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:43:03.503033 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:43:03.503069 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:43:03.503453 systemd[1]: Stopped target network.target - Network. Sep 4 17:43:03.503879 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:43:03.503917 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:43:03.504902 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:43:03.505327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:43:03.522734 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:43:03.533338 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:43:03.538136 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:43:03.543401 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:43:03.543445 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:43:03.551428 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:43:03.554380 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:43:03.559804 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:43:03.559870 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:43:03.565301 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:43:03.569209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:43:03.591082 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:43:03.601627 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:43:03.608473 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:43:03.608983 systemd-networkd[900]: eth0: DHCPv6 lease lost Sep 4 17:43:03.613474 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:43:03.616054 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Sep 4 17:43:03.622861 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:43:03.622923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:43:03.644140 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:43:03.649147 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:43:03.649230 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:43:03.659233 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:43:03.663626 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:43:03.664955 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:43:03.680289 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:43:03.680463 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:43:03.685318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:43:03.685387 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:43:03.696985 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:43:03.697028 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:43:03.699812 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:43:03.699861 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:43:03.711157 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:43:03.713844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:43:03.721395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:43:03.721465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:43:03.735679 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: Data path switched from VF: enP5022s1 Sep 4 17:43:03.738148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:43:03.741297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:43:03.744241 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:43:03.749888 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:43:03.750015 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:43:03.761027 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:43:03.761087 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:43:03.770719 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:43:03.770779 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:43:03.780901 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:43:03.780980 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:43:03.786762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:43:03.786815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:43:03.799216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:43:03.799274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 17:43:03.805343 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:43:03.805448 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:43:03.811304 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:43:03.813750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:43:04.447396 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:43:04.447554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:43:04.448028 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:43:04.448263 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:43:04.448323 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:43:04.469737 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:43:04.478053 systemd[1]: Switching root. Sep 4 17:43:04.724546 systemd-journald[176]: Journal stopped Sep 4 17:43:10.849007 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Sep 4 17:43:10.849055 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:43:10.849073 kernel: SELinux: policy capability open_perms=1 Sep 4 17:43:10.849088 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:43:10.849100 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:43:10.849114 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:43:10.849129 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:43:10.849147 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:43:10.849161 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:43:10.849174 kernel: audit: type=1403 audit(1725471785.925:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:43:10.849190 systemd[1]: Successfully loaded SELinux policy in 126.911ms. Sep 4 17:43:10.849206 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.430ms. Sep 4 17:43:10.849223 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:43:10.849238 systemd[1]: Detected virtualization microsoft. Sep 4 17:43:10.849256 systemd[1]: Detected architecture x86-64. Sep 4 17:43:10.849277 systemd[1]: Detected first boot. Sep 4 17:43:10.849294 systemd[1]: Hostname set to <ci-4054.1.0-a-c31d97b133>. Sep 4 17:43:10.849307 systemd[1]: Initializing machine ID from random generator. Sep 4 17:43:10.849318 zram_generator::config[1180]: No configuration found. Sep 4 17:43:10.849331 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:43:10.849341 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:43:10.849350 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:43:10.849360 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:43:10.849370 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:43:10.849379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:43:10.849389 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:43:10.849405 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:43:10.849422 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:43:10.849439 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:43:10.849455 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:43:10.849473 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:43:10.849489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:43:10.849506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:43:10.849523 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:43:10.849545 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:43:10.849564 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:43:10.849583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:43:10.849601 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:43:10.849619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:43:10.849637 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:43:10.849659 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:43:10.849678 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:43:10.849700 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:43:10.849719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:43:10.849737 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:43:10.849755 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:43:10.849774 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:43:10.849792 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:43:10.849811 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:43:10.849829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:43:10.849847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:43:10.849865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:43:10.849882 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:43:10.849898 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:43:10.849917 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:43:10.849946 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:43:10.849963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:10.849980 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:43:10.849996 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:43:10.850012 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 4 17:43:10.850028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:43:10.850043 systemd[1]: Reached target machines.target - Containers. Sep 4 17:43:10.850063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:43:10.850078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:10.850094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:43:10.850112 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:43:10.850128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:43:10.850144 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:43:10.850160 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:43:10.850178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:43:10.850193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:43:10.850212 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:43:10.850228 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:43:10.854711 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:43:10.854746 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:43:10.854764 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:43:10.854783 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:43:10.854802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:43:10.854823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:43:10.854846 kernel: loop: module loaded Sep 4 17:43:10.854862 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:43:10.854879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:43:10.854896 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:43:10.854914 systemd[1]: Stopped verity-setup.service. Sep 4 17:43:10.854953 kernel: fuse: init (API version 7.39) Sep 4 17:43:10.854969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:10.855018 systemd-journald[1261]: Collecting audit messages is disabled. Sep 4 17:43:10.855064 systemd-journald[1261]: Journal started Sep 4 17:43:10.855096 systemd-journald[1261]: Runtime Journal (/run/log/journal/fccd914e9db6407397302a236dba29b2) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:43:10.154078 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:43:10.274973 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:43:10.275339 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:43:10.881438 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:43:10.881493 kernel: ACPI: bus type drm_connector registered Sep 4 17:43:10.868611 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 4 17:43:10.871791 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:43:10.874844 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:43:10.877526 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:43:10.888239 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:43:10.892445 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:43:10.895159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:43:10.898880 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:43:10.899074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:43:10.902595 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:43:10.905783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:43:10.906122 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:43:10.909305 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:43:10.909464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:43:10.912478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:43:10.912721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:43:10.916149 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:43:10.916302 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:43:10.919477 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:43:10.919627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:43:10.923293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:43:10.927093 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:43:10.930706 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:43:10.944597 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:43:10.954104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:43:10.961000 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:43:10.964288 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:43:10.964334 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:43:10.968929 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:43:10.982524 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:43:10.991094 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:43:10.994443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:11.022132 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:43:11.026326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:43:11.029919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 17:43:11.031513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:43:11.034824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:43:11.039443 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:43:11.055104 systemd-journald[1261]: Time spent on flushing to /var/log/journal/fccd914e9db6407397302a236dba29b2 is 35.905ms for 954 entries. Sep 4 17:43:11.055104 systemd-journald[1261]: System Journal (/var/log/journal/fccd914e9db6407397302a236dba29b2) is 8.0M, max 2.6G, 2.6G free. Sep 4 17:43:11.131054 systemd-journald[1261]: Received client request to flush runtime journal. Sep 4 17:43:11.131108 kernel: loop0: detected capacity change from 0 to 61888 Sep 4 17:43:11.047057 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:43:11.052098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:43:11.061975 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:43:11.065789 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:43:11.074124 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:43:11.077979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:43:11.093241 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:43:11.101802 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:43:11.109396 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:43:11.124127 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:43:11.131020 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:43:11.133438 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:43:11.175903 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:43:11.177232 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:43:11.191609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:43:11.230093 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Sep 4 17:43:11.230122 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Sep 4 17:43:11.237067 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:43:11.246269 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:43:11.382386 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:43:11.391135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:43:11.410846 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Sep 4 17:43:11.411245 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Sep 4 17:43:11.417630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 17:43:11.529026 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:43:11.587963 kernel: loop1: detected capacity change from 0 to 211296 Sep 4 17:43:11.658964 kernel: loop2: detected capacity change from 0 to 89336 Sep 4 17:43:12.057440 kernel: loop3: detected capacity change from 0 to 140728 Sep 4 17:43:12.438641 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:43:12.447181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:43:12.478754 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Sep 4 17:43:12.574960 kernel: loop4: detected capacity change from 0 to 61888 Sep 4 17:43:12.584956 kernel: loop5: detected capacity change from 0 to 211296 Sep 4 17:43:12.595957 kernel: loop6: detected capacity change from 0 to 89336 Sep 4 17:43:12.605956 kernel: loop7: detected capacity change from 0 to 140728 Sep 4 17:43:12.615684 (sd-merge)[1345]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 17:43:12.616235 (sd-merge)[1345]: Merged extensions into '/usr'. Sep 4 17:43:12.645439 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:43:12.645458 systemd[1]: Reloading... Sep 4 17:43:12.697064 zram_generator::config[1366]: No configuration found. Sep 4 17:43:12.811773 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1399) Sep 4 17:43:12.838126 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1399) Sep 4 17:43:12.942962 kernel: hv_vmbus: registering driver hv_balloon Sep 4 17:43:12.943060 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 4 17:43:12.960219 kernel: hv_vmbus: registering driver hyperv_fb Sep 4 17:43:12.967960 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 4 17:43:12.976101 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 4 17:43:12.985195 kernel: Console: switching to colour dummy device 80x25 Sep 4 17:43:12.991952 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:43:13.013830 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:43:13.030216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:13.160955 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1408) Sep 4 17:43:13.178288 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:43:13.178603 systemd[1]: Reloading finished in 532 ms. Sep 4 17:43:13.212388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:43:13.230805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:43:13.339144 systemd[1]: Starting ensure-sysext.service... Sep 4 17:43:13.350143 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:43:13.370157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:43:13.375359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 17:43:13.413222 systemd[1]: Reloading requested from client PID 1494 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:43:13.413420 systemd[1]: Reloading... Sep 4 17:43:13.439917 systemd-tmpfiles[1498]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:43:13.440438 systemd-tmpfiles[1498]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:43:13.456709 systemd-tmpfiles[1498]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:43:13.457147 systemd-tmpfiles[1498]: ACLs are not supported, ignoring. Sep 4 17:43:13.457223 systemd-tmpfiles[1498]: ACLs are not supported, ignoring. Sep 4 17:43:13.465847 systemd-tmpfiles[1498]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:43:13.465860 systemd-tmpfiles[1498]: Skipping /boot Sep 4 17:43:13.478957 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 4 17:43:13.482417 systemd-tmpfiles[1498]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:43:13.482430 systemd-tmpfiles[1498]: Skipping /boot Sep 4 17:43:13.560961 zram_generator::config[1535]: No configuration found. Sep 4 17:43:13.698899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:13.782854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:43:13.783478 systemd[1]: Reloading finished in 369 ms. Sep 4 17:43:13.816563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:43:13.839217 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:43:13.863196 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:43:13.868256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:43:13.873847 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:43:13.888987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:43:13.896255 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:43:13.906520 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:43:13.911291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:43:13.911484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:13.914732 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:43:13.921240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:43:13.928260 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:43:13.938411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:13.938789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:13.946842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:43:13.958369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 4 17:43:13.969356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:43:13.979648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:43:13.983571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:13.984324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:13.988060 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:43:13.993819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:43:13.994016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:43:14.003662 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:43:14.011611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:43:14.011796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:43:14.023143 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:43:14.035030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:14.042471 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:43:14.044013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:43:14.053311 lvm[1615]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:43:14.060998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:14.061471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:14.070330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:43:14.082335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:43:14.093410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:43:14.101628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:14.101898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:14.109990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:14.110384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:14.120270 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:43:14.123309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:14.123547 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:43:14.128832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:43:14.130123 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:43:14.140878 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 4 17:43:14.141183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:43:14.147469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:43:14.147804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:43:14.152892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:43:14.153200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:43:14.157214 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:43:14.161397 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:43:14.161520 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:43:14.166650 systemd[1]: Finished ensure-sysext.service. Sep 4 17:43:14.172834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:43:14.174734 augenrules[1646]: No rules Sep 4 17:43:14.184835 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:43:14.189054 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:43:14.189460 lvm[1657]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:43:14.189130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:43:14.189567 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:43:14.212163 systemd-resolved[1602]: Positive Trust Anchors: Sep 4 17:43:14.212179 systemd-resolved[1602]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:43:14.212238 systemd-resolved[1602]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:43:14.229351 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:43:14.236402 systemd-networkd[1497]: lo: Link UP Sep 4 17:43:14.236411 systemd-networkd[1497]: lo: Gained carrier Sep 4 17:43:14.238839 systemd-networkd[1497]: Enumeration completed Sep 4 17:43:14.239081 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:43:14.239282 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:14.239285 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:43:14.246105 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:43:14.252206 systemd-resolved[1602]: Using system hostname 'ci-4054.1.0-a-c31d97b133'. 
Sep 4 17:43:14.293952 kernel: mlx5_core 139e:00:02.0 enP5022s1: Link up Sep 4 17:43:14.316187 kernel: hv_netvsc 6045bdd0-ee91-6045-bdd0-ee916045bdd0 eth0: Data path switched to VF: enP5022s1 Sep 4 17:43:14.317211 systemd-networkd[1497]: enP5022s1: Link UP Sep 4 17:43:14.317371 systemd-networkd[1497]: eth0: Link UP Sep 4 17:43:14.317384 systemd-networkd[1497]: eth0: Gained carrier Sep 4 17:43:14.317411 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:14.318570 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:43:14.322158 systemd[1]: Reached target network.target - Network. Sep 4 17:43:14.322341 systemd-networkd[1497]: enP5022s1: Gained carrier Sep 4 17:43:14.325391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:43:14.357004 systemd-networkd[1497]: eth0: DHCPv4 address 10.200.4.10/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:43:14.617506 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:43:14.621651 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:43:15.519281 systemd-networkd[1497]: enP5022s1: Gained IPv6LL Sep 4 17:43:15.903125 systemd-networkd[1497]: eth0: Gained IPv6LL Sep 4 17:43:15.906541 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:43:15.909911 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:43:18.128670 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:43:18.139327 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:43:18.155115 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:43:18.167136 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:43:18.170416 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:43:18.173446 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:43:18.176797 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:43:18.180420 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:43:18.183334 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:43:18.186638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:43:18.190161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:43:18.190191 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:43:18.192582 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:43:18.196450 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:43:18.200578 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:43:18.210777 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:43:18.214306 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
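Once systemd-networkd reports eth0 gained carrier and a DHCPv4 lease (10.200.4.10/24 from 168.63.129.16 above), the link, resolver, and wait-online state can be confirmed with the standard systemd tools; a minimal sketch:

  # Per-link state: addresses, gateway and DNS as seen by systemd-networkd
  networkctl status eth0
  # Resolver state, including the DNSSEC trust anchor and negative trust anchors logged above
  resolvectl status
  # Reports active once the lease is configured and network-online.target is reachable
  systemctl is-active systemd-networkd-wait-online.service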
Sep 4 17:43:18.217644 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:43:18.220291 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:43:18.222783 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:43:18.222818 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:43:18.248095 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 17:43:18.254070 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:43:18.260559 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:43:18.267151 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:43:18.278201 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:43:18.283199 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:43:18.286183 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:43:18.290048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:18.298463 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:43:18.308136 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:43:18.316430 (chronyd)[1669]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 17:43:18.318052 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:43:18.320688 jq[1673]: false Sep 4 17:43:18.329556 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:43:18.344189 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:43:18.345749 chronyd[1687]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 17:43:18.351214 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:43:18.355626 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:43:18.356283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:43:18.357179 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:43:18.370331 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:43:18.384009 chronyd[1687]: Timezone right/UTC failed leap second check, ignoring Sep 4 17:43:18.386382 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 17:43:18.384235 chronyd[1687]: Loaded seccomp filter (level 2) Sep 4 17:43:18.389733 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:43:18.389974 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:43:18.414098 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:43:18.414328 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 17:43:18.424378 extend-filesystems[1674]: Found loop4 Sep 4 17:43:18.424378 extend-filesystems[1674]: Found loop5 Sep 4 17:43:18.424378 extend-filesystems[1674]: Found loop6 Sep 4 17:43:18.424378 extend-filesystems[1674]: Found loop7 Sep 4 17:43:18.424378 extend-filesystems[1674]: Found sda Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda1 Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda2 Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda3 Sep 4 17:43:18.436255 extend-filesystems[1674]: Found usr Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda4 Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda6 Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda7 Sep 4 17:43:18.436255 extend-filesystems[1674]: Found sda9 Sep 4 17:43:18.436255 extend-filesystems[1674]: Checking size of /dev/sda9 Sep 4 17:43:18.462526 dbus-daemon[1672]: [system] SELinux support is enabled Sep 4 17:43:18.445013 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:43:18.470070 jq[1691]: true Sep 4 17:43:18.445244 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:43:18.449291 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:43:18.466006 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:43:18.475962 jq[1713]: true Sep 4 17:43:18.495889 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:43:18.495955 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:43:18.501982 extend-filesystems[1674]: Old size kept for /dev/sda9 Sep 4 17:43:18.501982 extend-filesystems[1674]: Found sr0 Sep 4 17:43:18.509560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:43:18.509592 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:43:18.513538 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:43:18.513806 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:43:18.525474 update_engine[1690]: I0904 17:43:18.525416 1690 main.cc:92] Flatcar Update Engine starting Sep 4 17:43:18.531357 (ntainerd)[1715]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:43:18.535004 update_engine[1690]: I0904 17:43:18.534392 1690 update_check_scheduler.cc:74] Next update check in 8m7s Sep 4 17:43:18.536318 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:43:18.539034 tar[1702]: linux-amd64/helm Sep 4 17:43:18.551328 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:43:18.565474 systemd-logind[1689]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 4 17:43:18.565708 systemd-logind[1689]: New seat seat0. Sep 4 17:43:18.573481 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:43:18.665512 bash[1746]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:43:18.667682 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:43:18.681749 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 4 17:43:18.706803 coreos-metadata[1671]: Sep 04 17:43:18.705 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:43:18.707162 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1753) Sep 4 17:43:18.711058 coreos-metadata[1671]: Sep 04 17:43:18.711 INFO Fetch successful Sep 4 17:43:18.712394 coreos-metadata[1671]: Sep 04 17:43:18.712 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 17:43:18.719187 coreos-metadata[1671]: Sep 04 17:43:18.719 INFO Fetch successful Sep 4 17:43:18.721320 coreos-metadata[1671]: Sep 04 17:43:18.721 INFO Fetching http://168.63.129.16/machine/34137d12-7f2c-4344-a666-e3c7576eadd2/e8ffc3f7%2D68df%2D48b1%2D8c0a%2Dd7c30ab0fb9c.%5Fci%2D4054.1.0%2Da%2Dc31d97b133?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 17:43:18.725559 coreos-metadata[1671]: Sep 04 17:43:18.725 INFO Fetch successful Sep 4 17:43:18.725559 coreos-metadata[1671]: Sep 04 17:43:18.725 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:43:18.741449 coreos-metadata[1671]: Sep 04 17:43:18.741 INFO Fetch successful Sep 4 17:43:18.802983 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:43:18.818037 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:43:18.886954 locksmithd[1735]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:43:19.234596 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:43:19.266560 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:43:19.284251 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:43:19.292870 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 17:43:19.302678 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:43:19.302914 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:43:19.326243 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:43:19.356430 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:43:19.370496 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:43:19.375705 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:43:19.381638 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:43:19.398127 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 17:43:19.555455 tar[1702]: linux-amd64/LICENSE Sep 4 17:43:19.555455 tar[1702]: linux-amd64/README.md Sep 4 17:43:19.567563 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:43:19.756656 containerd[1715]: time="2024-09-04T17:43:19.755621100Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:43:19.790467 containerd[1715]: time="2024-09-04T17:43:19.790236400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792225100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
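The coreos-metadata fetches above go to the Azure wireserver (168.63.129.16) and to the instance metadata service (169.254.169.254). The IMDS query can be reproduced by hand with the same URL the log shows; the only extra requirement is the Metadata request header:

  # Query the Azure Instance Metadata Service for the VM size (same endpoint as in the log)
  curl -s -H "Metadata: true" \
    "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"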
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792269200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792291200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792455900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792476300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792561300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792581400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792824600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792849600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792874700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:19.792900 containerd[1715]: time="2024-09-04T17:43:19.792892600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.793359 containerd[1715]: time="2024-09-04T17:43:19.793036600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.793359 containerd[1715]: time="2024-09-04T17:43:19.793277500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:19.793848 containerd[1715]: time="2024-09-04T17:43:19.793428800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:19.793848 containerd[1715]: time="2024-09-04T17:43:19.793453400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:43:19.793848 containerd[1715]: time="2024-09-04T17:43:19.793546000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 17:43:19.793848 containerd[1715]: time="2024-09-04T17:43:19.793597100Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:43:19.805210 containerd[1715]: time="2024-09-04T17:43:19.805170400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:43:19.805316 containerd[1715]: time="2024-09-04T17:43:19.805238400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:43:19.805316 containerd[1715]: time="2024-09-04T17:43:19.805263900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:43:19.805316 containerd[1715]: time="2024-09-04T17:43:19.805300200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:43:19.805423 containerd[1715]: time="2024-09-04T17:43:19.805324100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.805481500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.805782100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.805907100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.805930200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806025600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806060100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806095200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806150000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806173100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806193200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806224800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806241900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806260200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 4 17:43:19.806631 containerd[1715]: time="2024-09-04T17:43:19.806297900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806322400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806341000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806371900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806390300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806416000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806448900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806467200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806485000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806504300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806533100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806550000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806567000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806618700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806650100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807142 containerd[1715]: time="2024-09-04T17:43:19.806667000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806695000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806843000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806871800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806888500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806921000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806952200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806972200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.806985400Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:43:19.807641 containerd[1715]: time="2024-09-04T17:43:19.807000200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:43:19.808003 containerd[1715]: time="2024-09-04T17:43:19.807469300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:43:19.808003 containerd[1715]: time="2024-09-04T17:43:19.807561200Z" level=info msg="Connect containerd service" Sep 4 17:43:19.808003 containerd[1715]: time="2024-09-04T17:43:19.807615500Z" level=info msg="using legacy CRI server" Sep 4 17:43:19.808003 containerd[1715]: time="2024-09-04T17:43:19.807626900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:43:19.808003 containerd[1715]: time="2024-09-04T17:43:19.807833300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.808752600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.808828300Z" level=info msg="Start subscribing containerd event" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.808885100Z" level=info msg="Start recovering state" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.808966300Z" level=info msg="Start event monitor" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.808988600Z" level=info msg="Start snapshots syncer" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.808999800Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:43:19.809261 containerd[1715]: time="2024-09-04T17:43:19.809009600Z" level=info msg="Start streaming server" Sep 4 17:43:19.813462 containerd[1715]: time="2024-09-04T17:43:19.809548600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:43:19.813462 containerd[1715]: time="2024-09-04T17:43:19.809614300Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:43:19.813462 containerd[1715]: time="2024-09-04T17:43:19.809772600Z" level=info msg="containerd successfully booted in 0.055017s" Sep 4 17:43:19.810168 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:43:19.923509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:19.927122 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:43:19.930999 systemd[1]: Startup finished in 926ms (firmware) + 31.224s (loader) + 990ms (kernel) + 11.438s (initrd) + 14.130s (userspace) = 58.711s. Sep 4 17:43:19.935350 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:20.383831 login[1811]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:43:20.389491 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:43:20.396245 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:43:20.408317 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:43:20.415067 systemd-logind[1689]: New session 1 of user core. Sep 4 17:43:20.422491 systemd-logind[1689]: New session 2 of user core. 
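The "Start cri plugin with config ..." dump above shows the effective CRI settings: overlayfs snapshotter, runc as the default runtime, and SystemdCgroup:true. A hedged sketch of how that fragment would look in a containerd config.toml, plus a quick liveness check against the socket path the log reports (the config file path is illustrative; Flatcar may carry these defaults baked in rather than in /etc):

  # Fragment of a containerd config.toml matching the logged CRI config (illustrative only):
  #   [plugins."io.containerd.grpc.v1.cri".containerd]
  #     snapshotter = "overlayfs"
  #     default_runtime_name = "runc"
  #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  #     runtime_type = "io.containerd.runc.v2"
  #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  #     SystemdCgroup = true
  # Confirm the daemon answers on the socket it reported serving
  ctr --address /run/containerd/containerd.sock version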
Sep 4 17:43:20.435643 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:43:20.444322 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:43:20.450776 (systemd)[1841]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:20.640760 systemd[1841]: Queued start job for default target default.target. Sep 4 17:43:20.647234 systemd[1841]: Created slice app.slice - User Application Slice. Sep 4 17:43:20.647265 systemd[1841]: Reached target paths.target - Paths. Sep 4 17:43:20.647282 systemd[1841]: Reached target timers.target - Timers. Sep 4 17:43:20.651071 systemd[1841]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:43:20.665068 systemd[1841]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:43:20.666123 systemd[1841]: Reached target sockets.target - Sockets. Sep 4 17:43:20.666228 systemd[1841]: Reached target basic.target - Basic System. Sep 4 17:43:20.666356 systemd[1841]: Reached target default.target - Main User Target. Sep 4 17:43:20.666476 systemd[1841]: Startup finished in 208ms. Sep 4 17:43:20.666495 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:43:20.672104 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:43:20.673914 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:43:20.699541 kubelet[1830]: E0904 17:43:20.699460 1830 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:20.701988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:20.702171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:20.702598 systemd[1]: kubelet.service: Consumed 1.019s CPU time. 
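The kubelet failure above is expected at this point in boot: /var/lib/kubelet/config.yaml does not exist yet because it is normally written by kubeadm when the node is initialized or joined, so the unit exits and is restarted later. Purely as an illustration of the file's shape (placeholder values, not what kubeadm would actually generate for this node):

  # Illustrative only: kubeadm writes /var/lib/kubelet/config.yaml during 'kubeadm init' / 'kubeadm join'.
  # A minimal KubeletConfiguration looks roughly like:
  #   apiVersion: kubelet.config.k8s.io/v1beta1
  #   kind: KubeletConfiguration
  #   cgroupDriver: systemd        # consistent with SystemdCgroup=true in the containerd CRI config
  #   staticPodPath: /etc/kubernetes/manifests
  # Until that file exists the unit keeps failing, as the journal entries show
  test -f /var/lib/kubelet/config.yaml || echo "kubelet config not present yet"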
Sep 4 17:43:21.360996 waagent[1816]: 2024-09-04T17:43:21.360879Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 17:43:21.364878 waagent[1816]: 2024-09-04T17:43:21.364813Z INFO Daemon Daemon OS: flatcar 4054.1.0 Sep 4 17:43:21.367565 waagent[1816]: 2024-09-04T17:43:21.367512Z INFO Daemon Daemon Python: 3.11.9 Sep 4 17:43:21.370462 waagent[1816]: 2024-09-04T17:43:21.370406Z INFO Daemon Daemon Run daemon Sep 4 17:43:21.372805 waagent[1816]: 2024-09-04T17:43:21.372756Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4054.1.0' Sep 4 17:43:21.376972 waagent[1816]: 2024-09-04T17:43:21.376908Z INFO Daemon Daemon Using waagent for provisioning Sep 4 17:43:21.379642 waagent[1816]: 2024-09-04T17:43:21.379594Z INFO Daemon Daemon Activate resource disk Sep 4 17:43:21.382220 waagent[1816]: 2024-09-04T17:43:21.382165Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 17:43:21.389394 waagent[1816]: 2024-09-04T17:43:21.389340Z INFO Daemon Daemon Found device: None Sep 4 17:43:21.392241 waagent[1816]: 2024-09-04T17:43:21.392185Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 17:43:21.397037 waagent[1816]: 2024-09-04T17:43:21.396987Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 17:43:21.404367 waagent[1816]: 2024-09-04T17:43:21.404303Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:43:21.407689 waagent[1816]: 2024-09-04T17:43:21.407631Z INFO Daemon Daemon Running default provisioning handler Sep 4 17:43:21.417618 waagent[1816]: 2024-09-04T17:43:21.417547Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 4 17:43:21.423948 waagent[1816]: 2024-09-04T17:43:21.422402Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 17:43:21.423948 waagent[1816]: 2024-09-04T17:43:21.423437Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 17:43:21.423948 waagent[1816]: 2024-09-04T17:43:21.423880Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 17:43:21.509356 waagent[1816]: 2024-09-04T17:43:21.506355Z INFO Daemon Daemon Successfully mounted dvd Sep 4 17:43:21.537595 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 17:43:21.540541 waagent[1816]: 2024-09-04T17:43:21.540470Z INFO Daemon Daemon Detect protocol endpoint Sep 4 17:43:21.555870 waagent[1816]: 2024-09-04T17:43:21.540766Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:43:21.555870 waagent[1816]: 2024-09-04T17:43:21.541855Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 4 17:43:21.555870 waagent[1816]: 2024-09-04T17:43:21.542736Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 17:43:21.555870 waagent[1816]: 2024-09-04T17:43:21.543775Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 17:43:21.555870 waagent[1816]: 2024-09-04T17:43:21.544760Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 17:43:21.597969 waagent[1816]: 2024-09-04T17:43:21.597905Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 17:43:21.606966 waagent[1816]: 2024-09-04T17:43:21.598455Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 17:43:21.606966 waagent[1816]: 2024-09-04T17:43:21.598891Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 17:43:21.772496 waagent[1816]: 2024-09-04T17:43:21.772334Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 17:43:21.776197 waagent[1816]: 2024-09-04T17:43:21.776122Z INFO Daemon Daemon Forcing an update of the goal state. Sep 4 17:43:21.782926 waagent[1816]: 2024-09-04T17:43:21.782868Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:43:21.801591 waagent[1816]: 2024-09-04T17:43:21.801520Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Sep 4 17:43:21.821549 waagent[1816]: 2024-09-04T17:43:21.802286Z INFO Daemon Sep 4 17:43:21.821549 waagent[1816]: 2024-09-04T17:43:21.802498Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f59a9059-1ea2-493e-89a2-a41916695cdc eTag: 10378492740200228101 source: Fabric] Sep 4 17:43:21.821549 waagent[1816]: 2024-09-04T17:43:21.803240Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 17:43:21.821549 waagent[1816]: 2024-09-04T17:43:21.804035Z INFO Daemon Sep 4 17:43:21.821549 waagent[1816]: 2024-09-04T17:43:21.804131Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:43:21.821549 waagent[1816]: 2024-09-04T17:43:21.809417Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 17:43:21.890883 waagent[1816]: 2024-09-04T17:43:21.890793Z INFO Daemon Downloaded certificate {'thumbprint': 'CB0F3A1391A9191BB6A0053DCF289B64707A5F37', 'hasPrivateKey': False} Sep 4 17:43:21.896593 waagent[1816]: 2024-09-04T17:43:21.896526Z INFO Daemon Downloaded certificate {'thumbprint': '65E28834450A7446A6671EED6DCC33488DC057DF', 'hasPrivateKey': True} Sep 4 17:43:21.903812 waagent[1816]: 2024-09-04T17:43:21.897152Z INFO Daemon Fetch goal state completed Sep 4 17:43:21.909333 waagent[1816]: 2024-09-04T17:43:21.909282Z INFO Daemon Daemon Starting provisioning Sep 4 17:43:21.916265 waagent[1816]: 2024-09-04T17:43:21.909504Z INFO Daemon Daemon Handle ovf-env.xml. Sep 4 17:43:21.916265 waagent[1816]: 2024-09-04T17:43:21.910450Z INFO Daemon Daemon Set hostname [ci-4054.1.0-a-c31d97b133] Sep 4 17:43:21.932687 waagent[1816]: 2024-09-04T17:43:21.932612Z INFO Daemon Daemon Publish hostname [ci-4054.1.0-a-c31d97b133] Sep 4 17:43:21.940666 waagent[1816]: 2024-09-04T17:43:21.933162Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 17:43:21.940666 waagent[1816]: 2024-09-04T17:43:21.934242Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 17:43:21.959437 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:21.959448 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 17:43:21.959499 systemd-networkd[1497]: eth0: DHCP lease lost Sep 4 17:43:21.960713 waagent[1816]: 2024-09-04T17:43:21.960632Z INFO Daemon Daemon Create user account if not exists Sep 4 17:43:21.978049 waagent[1816]: 2024-09-04T17:43:21.961081Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 17:43:21.978049 waagent[1816]: 2024-09-04T17:43:21.961630Z INFO Daemon Daemon Configure sudoer Sep 4 17:43:21.978049 waagent[1816]: 2024-09-04T17:43:21.962801Z INFO Daemon Daemon Configure sshd Sep 4 17:43:21.978049 waagent[1816]: 2024-09-04T17:43:21.963601Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 17:43:21.978049 waagent[1816]: 2024-09-04T17:43:21.964495Z INFO Daemon Daemon Deploy ssh public key. Sep 4 17:43:21.981044 systemd-networkd[1497]: eth0: DHCPv6 lease lost Sep 4 17:43:22.009026 systemd-networkd[1497]: eth0: DHCPv4 address 10.200.4.10/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:43:23.250238 waagent[1816]: 2024-09-04T17:43:23.250166Z INFO Daemon Daemon Provisioning complete Sep 4 17:43:23.263586 waagent[1816]: 2024-09-04T17:43:23.263512Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 17:43:23.271619 waagent[1816]: 2024-09-04T17:43:23.263875Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 17:43:23.271619 waagent[1816]: 2024-09-04T17:43:23.264789Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 17:43:23.390069 waagent[1900]: 2024-09-04T17:43:23.389976Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 17:43:23.390512 waagent[1900]: 2024-09-04T17:43:23.390136Z INFO ExtHandler ExtHandler OS: flatcar 4054.1.0 Sep 4 17:43:23.390512 waagent[1900]: 2024-09-04T17:43:23.390214Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 4 17:43:23.412576 waagent[1900]: 2024-09-04T17:43:23.412491Z INFO ExtHandler ExtHandler Distro: flatcar-4054.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 17:43:23.412782 waagent[1900]: 2024-09-04T17:43:23.412736Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:43:23.412873 waagent[1900]: 2024-09-04T17:43:23.412831Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:43:23.420952 waagent[1900]: 2024-09-04T17:43:23.420856Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:43:23.426619 waagent[1900]: 2024-09-04T17:43:23.426551Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Sep 4 17:43:23.427195 waagent[1900]: 2024-09-04T17:43:23.427130Z INFO ExtHandler Sep 4 17:43:23.427299 waagent[1900]: 2024-09-04T17:43:23.427243Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e77fc42b-4dd2-45a9-bb11-173cca488a46 eTag: 10378492740200228101 source: Fabric] Sep 4 17:43:23.427634 waagent[1900]: 2024-09-04T17:43:23.427583Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 4 17:43:23.428200 waagent[1900]: 2024-09-04T17:43:23.428143Z INFO ExtHandler Sep 4 17:43:23.428267 waagent[1900]: 2024-09-04T17:43:23.428226Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:43:23.431471 waagent[1900]: 2024-09-04T17:43:23.431425Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 17:43:23.566831 waagent[1900]: 2024-09-04T17:43:23.566687Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CB0F3A1391A9191BB6A0053DCF289B64707A5F37', 'hasPrivateKey': False} Sep 4 17:43:23.567230 waagent[1900]: 2024-09-04T17:43:23.567175Z INFO ExtHandler Downloaded certificate {'thumbprint': '65E28834450A7446A6671EED6DCC33488DC057DF', 'hasPrivateKey': True} Sep 4 17:43:23.567658 waagent[1900]: 2024-09-04T17:43:23.567609Z INFO ExtHandler Fetch goal state completed Sep 4 17:43:23.582407 waagent[1900]: 2024-09-04T17:43:23.582349Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1900 Sep 4 17:43:23.582551 waagent[1900]: 2024-09-04T17:43:23.582507Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 17:43:23.584118 waagent[1900]: 2024-09-04T17:43:23.584068Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4054.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 17:43:23.584485 waagent[1900]: 2024-09-04T17:43:23.584442Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 17:43:24.170973 waagent[1900]: 2024-09-04T17:43:24.170901Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 17:43:24.171254 waagent[1900]: 2024-09-04T17:43:24.171194Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 17:43:24.178291 waagent[1900]: 2024-09-04T17:43:24.178188Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 17:43:24.185293 systemd[1]: Reloading requested from client PID 1915 ('systemctl') (unit waagent.service)... Sep 4 17:43:24.185310 systemd[1]: Reloading... Sep 4 17:43:24.273004 zram_generator::config[1943]: No configuration found. Sep 4 17:43:24.489745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:24.564878 systemd[1]: Reloading finished in 379 ms. Sep 4 17:43:24.592455 waagent[1900]: 2024-09-04T17:43:24.590512Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 17:43:24.599227 systemd[1]: Reloading requested from client PID 2002 ('systemctl') (unit waagent.service)... Sep 4 17:43:24.599243 systemd[1]: Reloading... Sep 4 17:43:24.684057 zram_generator::config[2034]: No configuration found. Sep 4 17:43:24.799474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:24.874963 systemd[1]: Reloading finished in 275 ms. 
Sep 4 17:43:24.902579 waagent[1900]: 2024-09-04T17:43:24.898788Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 17:43:24.902579 waagent[1900]: 2024-09-04T17:43:24.899023Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 17:43:25.289315 waagent[1900]: 2024-09-04T17:43:25.289219Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 17:43:25.289985 waagent[1900]: 2024-09-04T17:43:25.289895Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 17:43:25.290898 waagent[1900]: 2024-09-04T17:43:25.290838Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 17:43:25.292023 waagent[1900]: 2024-09-04T17:43:25.291959Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:43:25.292099 waagent[1900]: 2024-09-04T17:43:25.292037Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:43:25.292233 waagent[1900]: 2024-09-04T17:43:25.292159Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:43:25.292579 waagent[1900]: 2024-09-04T17:43:25.292526Z INFO EnvHandler ExtHandler Configure routes Sep 4 17:43:25.292702 waagent[1900]: 2024-09-04T17:43:25.292648Z INFO EnvHandler ExtHandler Gateway:None Sep 4 17:43:25.292789 waagent[1900]: 2024-09-04T17:43:25.292740Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 17:43:25.292892 waagent[1900]: 2024-09-04T17:43:25.292844Z INFO EnvHandler ExtHandler Routes:None Sep 4 17:43:25.293242 waagent[1900]: 2024-09-04T17:43:25.293192Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:43:25.293608 waagent[1900]: 2024-09-04T17:43:25.293539Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 17:43:25.293801 waagent[1900]: 2024-09-04T17:43:25.293752Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 17:43:25.294189 waagent[1900]: 2024-09-04T17:43:25.294137Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 17:43:25.294750 waagent[1900]: 2024-09-04T17:43:25.294684Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 17:43:25.295003 waagent[1900]: 2024-09-04T17:43:25.294914Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 4 17:43:25.295922 waagent[1900]: 2024-09-04T17:43:25.295866Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 17:43:25.296266 waagent[1900]: 2024-09-04T17:43:25.296094Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 17:43:25.296266 waagent[1900]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 17:43:25.296266 waagent[1900]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 17:43:25.296266 waagent[1900]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 17:43:25.296266 waagent[1900]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:43:25.296266 waagent[1900]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:43:25.296266 waagent[1900]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:43:25.307954 waagent[1900]: 2024-09-04T17:43:25.306474Z INFO ExtHandler ExtHandler Sep 4 17:43:25.307954 waagent[1900]: 2024-09-04T17:43:25.306585Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5a3a3965-ea8c-4224-b4b5-a2dd6b8a3477 correlation f1a6fc3c-d573-4818-b5b2-dcafe00303ee created: 2024-09-04T17:42:10.343781Z] Sep 4 17:43:25.307954 waagent[1900]: 2024-09-04T17:43:25.307053Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 17:43:25.307954 waagent[1900]: 2024-09-04T17:43:25.307880Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 4 17:43:25.345121 waagent[1900]: 2024-09-04T17:43:25.345045Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FB381249-5F94-4993-BC95-26EA987A6768;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 17:43:25.369304 waagent[1900]: 2024-09-04T17:43:25.369227Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 17:43:25.369304 waagent[1900]: Executing ['ip', '-a', '-o', 'link']: Sep 4 17:43:25.369304 waagent[1900]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 17:43:25.369304 waagent[1900]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d0:ee:91 brd ff:ff:ff:ff:ff:ff Sep 4 17:43:25.369304 waagent[1900]: 3: enP5022s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d0:ee:91 brd ff:ff:ff:ff:ff:ff\ altname enP5022p0s2 Sep 4 17:43:25.369304 waagent[1900]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 17:43:25.369304 waagent[1900]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 17:43:25.369304 waagent[1900]: 2: eth0 inet 10.200.4.10/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 17:43:25.369304 waagent[1900]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 17:43:25.369304 waagent[1900]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 17:43:25.369304 waagent[1900]: 2: eth0 inet6 fe80::6245:bdff:fed0:ee91/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:43:25.369304 waagent[1900]: 3: enP5022s1 inet6 fe80::6245:bdff:fed0:ee91/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:43:25.399570 waagent[1900]: 2024-09-04T17:43:25.399505Z INFO EnvHandler ExtHandler Successfully added Azure fabric 
firewall rules. Current Firewall rules: Sep 4 17:43:25.399570 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:43:25.399570 waagent[1900]: pkts bytes target prot opt in out source destination Sep 4 17:43:25.399570 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:43:25.399570 waagent[1900]: pkts bytes target prot opt in out source destination Sep 4 17:43:25.399570 waagent[1900]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:43:25.399570 waagent[1900]: pkts bytes target prot opt in out source destination Sep 4 17:43:25.399570 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:43:25.399570 waagent[1900]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:43:25.399570 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:43:25.403414 waagent[1900]: 2024-09-04T17:43:25.403357Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 17:43:25.403414 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:43:25.403414 waagent[1900]: pkts bytes target prot opt in out source destination Sep 4 17:43:25.403414 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:43:25.403414 waagent[1900]: pkts bytes target prot opt in out source destination Sep 4 17:43:25.403414 waagent[1900]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:43:25.403414 waagent[1900]: pkts bytes target prot opt in out source destination Sep 4 17:43:25.403414 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:43:25.403414 waagent[1900]: 14 1517 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:43:25.403414 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:43:25.403802 waagent[1900]: 2024-09-04T17:43:25.403650Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 17:43:30.837187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:43:30.844164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:30.946768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:30.960294 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:31.537327 kubelet[2130]: E0904 17:43:31.537266 2130 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:31.541548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:31.541748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:41.587312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:43:41.595142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:41.692570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
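The routing table printed by MonitorHandler is the raw little-endian hex of /proc/net/route: 0104C80A decodes to 10.200.4.1 (the default gateway), 0004C80A with mask 00FFFFFF is 10.200.4.0/24, 10813FA8 is 168.63.129.16 (the Azure wireserver) and FEA9FEA9 is 169.254.169.254 (IMDS). The OUTPUT-chain snapshot that EnvHandler logs can be approximated with the iptables commands below; this is a sketch assuming the default filter table and plain iptables syntax, not the agent's exact invocation.

    # Approximate reconstruction of the wireserver rules shown above
    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT                       # allow DNS to the wireserver
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT           # allow root-owned (agent) traffic
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP # block other new connections to it
    iptables -L OUTPUT -nv                                                                # compare against the pkts/bytes counters in the log

The two snapshots differ only in the ACCEPT counters (10 packets/1102 bytes vs 14/1517), i.e. the agent itself kept talking to the wireserver between the two dumps.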
Sep 4 17:43:41.697292 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:42.172788 chronyd[1687]: Selected source PHC0 Sep 4 17:43:42.264747 kubelet[2146]: E0904 17:43:42.264667 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:42.267344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:42.267545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:47.668660 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:43:47.673246 systemd[1]: Started sshd@0-10.200.4.10:22-10.200.16.10:59306.service - OpenSSH per-connection server daemon (10.200.16.10:59306). Sep 4 17:43:48.312529 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 59306 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:48.314366 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:48.318884 systemd-logind[1689]: New session 3 of user core. Sep 4 17:43:48.325097 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:43:48.838800 systemd[1]: Started sshd@1-10.200.4.10:22-10.200.16.10:47884.service - OpenSSH per-connection server daemon (10.200.16.10:47884). Sep 4 17:43:49.440586 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 47884 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:49.442364 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:49.446958 systemd-logind[1689]: New session 4 of user core. Sep 4 17:43:49.457130 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:43:49.857375 sshd[2161]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:49.862091 systemd[1]: sshd@1-10.200.4.10:22-10.200.16.10:47884.service: Deactivated successfully. Sep 4 17:43:49.864367 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:43:49.865204 systemd-logind[1689]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:43:49.866435 systemd-logind[1689]: Removed session 4. Sep 4 17:43:49.961078 systemd[1]: Started sshd@2-10.200.4.10:22-10.200.16.10:47892.service - OpenSSH per-connection server daemon (10.200.16.10:47892). Sep 4 17:43:50.542899 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 47892 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:50.544690 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:50.550230 systemd-logind[1689]: New session 5 of user core. Sep 4 17:43:50.556128 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:43:50.955031 sshd[2168]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:50.959182 systemd[1]: sshd@2-10.200.4.10:22-10.200.16.10:47892.service: Deactivated successfully. Sep 4 17:43:50.961052 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:43:50.961734 systemd-logind[1689]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:43:50.962664 systemd-logind[1689]: Removed session 5. 
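Both kubelet starts so far (restart counters 1 and 2) fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules the next restart. The loop only ends once a later provisioning step supplies the configuration (the kubelet started at 17:44:19 below comes up without this error and begins bootstrapping). A hypothetical way to confirm the state of such a loop, not taken from this capture:

    ls -l /var/lib/kubelet/config.yaml                                   # absent until provisioning writes it
    systemctl show kubelet -p NRestarts                                  # the restart counter systemd reports in the log
    journalctl -u kubelet -o cat | grep -m1 'failed to load Kubelet config file'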
Sep 4 17:43:51.062182 systemd[1]: Started sshd@3-10.200.4.10:22-10.200.16.10:47904.service - OpenSSH per-connection server daemon (10.200.16.10:47904). Sep 4 17:43:51.643547 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 47904 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:51.645295 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:51.650795 systemd-logind[1689]: New session 6 of user core. Sep 4 17:43:51.660101 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:43:52.061024 sshd[2175]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:52.064795 systemd[1]: sshd@3-10.200.4.10:22-10.200.16.10:47904.service: Deactivated successfully. Sep 4 17:43:52.067058 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:43:52.068909 systemd-logind[1689]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:43:52.070241 systemd-logind[1689]: Removed session 6. Sep 4 17:43:52.165080 systemd[1]: Started sshd@4-10.200.4.10:22-10.200.16.10:47914.service - OpenSSH per-connection server daemon (10.200.16.10:47914). Sep 4 17:43:52.337327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:43:52.342198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:52.440250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:52.444762 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:52.750683 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 47914 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:52.751973 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:52.755991 systemd-logind[1689]: New session 7 of user core. Sep 4 17:43:52.764110 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:43:53.051001 kubelet[2192]: E0904 17:43:53.050849 2192 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:53.053880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:53.054081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:53.198919 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:43:53.199303 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:53.210245 sudo[2201]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:53.311656 sshd[2182]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:53.316917 systemd[1]: sshd@4-10.200.4.10:22-10.200.16.10:47914.service: Deactivated successfully. Sep 4 17:43:53.318854 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:43:53.319588 systemd-logind[1689]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:43:53.320626 systemd-logind[1689]: Removed session 7. Sep 4 17:43:53.419300 systemd[1]: Started sshd@5-10.200.4.10:22-10.200.16.10:47924.service - OpenSSH per-connection server daemon (10.200.16.10:47924). 
Sep 4 17:43:53.999225 sshd[2206]: Accepted publickey for core from 10.200.16.10 port 47924 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:54.001063 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:54.006162 systemd-logind[1689]: New session 8 of user core. Sep 4 17:43:54.017083 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:43:54.334000 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:43:54.334363 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:54.337789 sudo[2210]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:54.342659 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:43:54.343024 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:54.356260 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:43:54.357826 auditctl[2213]: No rules Sep 4 17:43:54.358200 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:43:54.358412 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:43:54.361051 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:43:54.389715 augenrules[2231]: No rules Sep 4 17:43:54.391065 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:43:54.392161 sudo[2209]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:54.485637 sshd[2206]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:54.490316 systemd[1]: sshd@5-10.200.4.10:22-10.200.16.10:47924.service: Deactivated successfully. Sep 4 17:43:54.492432 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:43:54.493359 systemd-logind[1689]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:43:54.494395 systemd-logind[1689]: Removed session 8. Sep 4 17:43:54.594561 systemd[1]: Started sshd@6-10.200.4.10:22-10.200.16.10:47940.service - OpenSSH per-connection server daemon (10.200.16.10:47940). Sep 4 17:43:55.170012 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 47940 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:43:55.171771 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:55.177406 systemd-logind[1689]: New session 9 of user core. Sep 4 17:43:55.188151 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:43:55.494279 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:43:55.494713 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:55.997292 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:43:55.997355 (dockerd)[2254]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:43:57.633753 dockerd[2254]: time="2024-09-04T17:43:57.633696509Z" level=info msg="Starting up" Sep 4 17:43:58.098029 dockerd[2254]: time="2024-09-04T17:43:58.097969450Z" level=info msg="Loading containers: start." 
Sep 4 17:43:58.245025 kernel: Initializing XFRM netlink socket Sep 4 17:43:58.375267 systemd-networkd[1497]: docker0: Link UP Sep 4 17:43:58.423120 dockerd[2254]: time="2024-09-04T17:43:58.423076277Z" level=info msg="Loading containers: done." Sep 4 17:43:58.445265 dockerd[2254]: time="2024-09-04T17:43:58.445214352Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:43:58.445456 dockerd[2254]: time="2024-09-04T17:43:58.445336655Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:43:58.445515 dockerd[2254]: time="2024-09-04T17:43:58.445466357Z" level=info msg="Daemon has completed initialization" Sep 4 17:43:58.499878 dockerd[2254]: time="2024-09-04T17:43:58.499531618Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:43:58.499758 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:44:00.277044 containerd[1715]: time="2024-09-04T17:44:00.277006971Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:44:01.066999 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 4 17:44:01.071653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285604854.mount: Deactivated successfully. Sep 4 17:44:02.972907 containerd[1715]: time="2024-09-04T17:44:02.972850736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:02.975845 containerd[1715]: time="2024-09-04T17:44:02.975780599Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232957" Sep 4 17:44:02.978893 containerd[1715]: time="2024-09-04T17:44:02.978836464Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:02.983713 containerd[1715]: time="2024-09-04T17:44:02.983650968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:02.984757 containerd[1715]: time="2024-09-04T17:44:02.984618189Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 2.707572617s" Sep 4 17:44:02.984757 containerd[1715]: time="2024-09-04T17:44:02.984660589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 17:44:03.010380 containerd[1715]: time="2024-09-04T17:44:03.009784029Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:44:03.087186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 17:44:03.092512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:44:03.196818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:44:03.208309 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:44:03.377400 update_engine[1690]: I0904 17:44:03.377310 1690 update_attempter.cc:509] Updating boot flags... Sep 4 17:44:03.769823 kubelet[2460]: E0904 17:44:03.752306 2460 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:44:03.755696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:44:03.755891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:44:03.777993 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2480) Sep 4 17:44:05.614863 containerd[1715]: time="2024-09-04T17:44:05.614809090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:05.616890 containerd[1715]: time="2024-09-04T17:44:05.616852637Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206214" Sep 4 17:44:05.620324 containerd[1715]: time="2024-09-04T17:44:05.620264017Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:05.627818 containerd[1715]: time="2024-09-04T17:44:05.627757392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:05.629034 containerd[1715]: time="2024-09-04T17:44:05.629000121Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 2.619172791s" Sep 4 17:44:05.629034 containerd[1715]: time="2024-09-04T17:44:05.629032022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 17:44:05.652971 containerd[1715]: time="2024-09-04T17:44:05.652919280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:44:07.579979 containerd[1715]: time="2024-09-04T17:44:07.579914395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:07.582081 containerd[1715]: time="2024-09-04T17:44:07.582021850Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321515" Sep 4 17:44:07.585416 containerd[1715]: time="2024-09-04T17:44:07.585363138Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:07.590554 containerd[1715]: 
time="2024-09-04T17:44:07.590502074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:07.591798 containerd[1715]: time="2024-09-04T17:44:07.591487500Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.938353414s" Sep 4 17:44:07.591798 containerd[1715]: time="2024-09-04T17:44:07.591528301Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 17:44:07.613130 containerd[1715]: time="2024-09-04T17:44:07.613088369Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:44:09.010657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985883973.mount: Deactivated successfully. Sep 4 17:44:09.461698 containerd[1715]: time="2024-09-04T17:44:09.461648169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:09.465264 containerd[1715]: time="2024-09-04T17:44:09.465198362Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600388" Sep 4 17:44:09.468168 containerd[1715]: time="2024-09-04T17:44:09.468102639Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:09.475678 containerd[1715]: time="2024-09-04T17:44:09.475614837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:09.476426 containerd[1715]: time="2024-09-04T17:44:09.476209852Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 1.863072682s" Sep 4 17:44:09.476426 containerd[1715]: time="2024-09-04T17:44:09.476251553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:44:09.497955 containerd[1715]: time="2024-09-04T17:44:09.497903624Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:44:10.214143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646573914.mount: Deactivated successfully. 
Sep 4 17:44:11.448449 containerd[1715]: time="2024-09-04T17:44:11.448391009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:11.451520 containerd[1715]: time="2024-09-04T17:44:11.451470890Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Sep 4 17:44:11.454907 containerd[1715]: time="2024-09-04T17:44:11.454855880Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:11.459510 containerd[1715]: time="2024-09-04T17:44:11.459476701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:11.460624 containerd[1715]: time="2024-09-04T17:44:11.460471227Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.962503302s" Sep 4 17:44:11.460624 containerd[1715]: time="2024-09-04T17:44:11.460515529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:44:11.483974 containerd[1715]: time="2024-09-04T17:44:11.483835943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:44:12.121975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815244217.mount: Deactivated successfully. 
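The pulls above are containerd fetching the v1.29.8 control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy) plus coredns and, next, pause; each one ends with the repo digest and size containerd reports. They sit in containerd's CRI image store (the k8s.io namespace) and can be listed or re-pulled by hand; the commands below are illustrative and not part of the captured session.

    ctr -n k8s.io images ls -q | grep registry.k8s.io      # images present after these pulls
    crictl images                                          # the same store seen through the CRI
    ctr -n k8s.io images pull registry.k8s.io/pause:3.9    # manual pull of the image fetched next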
Sep 4 17:44:12.166915 containerd[1715]: time="2024-09-04T17:44:12.166858312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:12.169545 containerd[1715]: time="2024-09-04T17:44:12.169482174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Sep 4 17:44:12.173762 containerd[1715]: time="2024-09-04T17:44:12.173708375Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:12.178358 containerd[1715]: time="2024-09-04T17:44:12.178296484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:12.179593 containerd[1715]: time="2024-09-04T17:44:12.178997801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 695.116457ms" Sep 4 17:44:12.179593 containerd[1715]: time="2024-09-04T17:44:12.179032802Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:44:12.199052 containerd[1715]: time="2024-09-04T17:44:12.199008478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:44:12.945593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626387569.mount: Deactivated successfully. Sep 4 17:44:13.837115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 17:44:13.845158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:44:13.951603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:44:13.956358 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:44:14.002739 kubelet[2637]: E0904 17:44:14.002676 2637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:44:14.005091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:44:14.005257 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:44:16.315951 containerd[1715]: time="2024-09-04T17:44:16.315873462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:16.317926 containerd[1715]: time="2024-09-04T17:44:16.317866123Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Sep 4 17:44:16.322345 containerd[1715]: time="2024-09-04T17:44:16.322285457Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:16.327644 containerd[1715]: time="2024-09-04T17:44:16.327583818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:16.329568 containerd[1715]: time="2024-09-04T17:44:16.328624949Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.129580171s" Sep 4 17:44:16.329568 containerd[1715]: time="2024-09-04T17:44:16.328663550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:44:19.172209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:44:19.179221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:44:19.207677 systemd[1]: Reloading requested from client PID 2720 ('systemctl') (unit session-9.scope)... Sep 4 17:44:19.207893 systemd[1]: Reloading... Sep 4 17:44:19.317208 zram_generator::config[2757]: No configuration found. Sep 4 17:44:19.437698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:44:19.516452 systemd[1]: Reloading finished in 307 ms. Sep 4 17:44:19.705675 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:44:19.705823 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:44:19.706499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:44:19.713553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:44:19.812627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:44:19.818756 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:44:19.862120 kubelet[2824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:44:19.862120 kubelet[2824]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 17:44:19.862120 kubelet[2824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:44:20.443760 kubelet[2824]: I0904 17:44:20.443658 2824 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:44:20.687683 kubelet[2824]: I0904 17:44:20.687642 2824 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:44:20.687683 kubelet[2824]: I0904 17:44:20.687675 2824 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:44:20.687962 kubelet[2824]: I0904 17:44:20.687927 2824 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:44:20.705445 kubelet[2824]: E0904 17:44:20.705084 2824 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.706272 kubelet[2824]: I0904 17:44:20.706145 2824 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:44:20.715169 kubelet[2824]: I0904 17:44:20.715144 2824 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:44:20.717578 kubelet[2824]: I0904 17:44:20.717552 2824 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:44:20.718553 kubelet[2824]: I0904 17:44:20.718061 2824 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:44:20.718553 kubelet[2824]: I0904 17:44:20.718101 2824 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:44:20.718553 kubelet[2824]: I0904 17:44:20.718117 2824 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 
17:44:20.718553 kubelet[2824]: I0904 17:44:20.718241 2824 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:44:20.718553 kubelet[2824]: I0904 17:44:20.718349 2824 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:44:20.718553 kubelet[2824]: I0904 17:44:20.718367 2824 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:44:20.718553 kubelet[2824]: I0904 17:44:20.718397 2824 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:44:20.718956 kubelet[2824]: I0904 17:44:20.718415 2824 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:44:20.723367 kubelet[2824]: W0904 17:44:20.722920 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.723367 kubelet[2824]: E0904 17:44:20.723017 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.723367 kubelet[2824]: I0904 17:44:20.723103 2824 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:44:20.726374 kubelet[2824]: I0904 17:44:20.726235 2824 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:44:20.728497 kubelet[2824]: W0904 17:44:20.727244 2824 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:44:20.728497 kubelet[2824]: I0904 17:44:20.727827 2824 server.go:1256] "Started kubelet" Sep 4 17:44:20.728497 kubelet[2824]: W0904 17:44:20.728407 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-c31d97b133&limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.728497 kubelet[2824]: E0904 17:44:20.728453 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-c31d97b133&limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.728690 kubelet[2824]: I0904 17:44:20.728609 2824 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:44:20.729023 kubelet[2824]: I0904 17:44:20.729000 2824 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:44:20.729094 kubelet[2824]: I0904 17:44:20.729064 2824 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:44:20.730031 kubelet[2824]: I0904 17:44:20.730012 2824 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:44:20.730972 kubelet[2824]: I0904 17:44:20.730955 2824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:44:20.737520 kubelet[2824]: I0904 17:44:20.737308 2824 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:44:20.740464 kubelet[2824]: I0904 17:44:20.740438 2824 desired_state_of_world_populator.go:151] "Desired state populator starts to 
run" Sep 4 17:44:20.740543 kubelet[2824]: I0904 17:44:20.740508 2824 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:44:20.742419 kubelet[2824]: E0904 17:44:20.741982 2824 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054.1.0-a-c31d97b133.17f21b83f09ce938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054.1.0-a-c31d97b133,UID:ci-4054.1.0-a-c31d97b133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054.1.0-a-c31d97b133,},FirstTimestamp:2024-09-04 17:44:20.727802168 +0000 UTC m=+0.904658332,LastTimestamp:2024-09-04 17:44:20.727802168 +0000 UTC m=+0.904658332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054.1.0-a-c31d97b133,}" Sep 4 17:44:20.742419 kubelet[2824]: E0904 17:44:20.742093 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-c31d97b133?timeout=10s\": dial tcp 10.200.4.10:6443: connect: connection refused" interval="200ms" Sep 4 17:44:20.743044 kubelet[2824]: I0904 17:44:20.743024 2824 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:44:20.743139 kubelet[2824]: I0904 17:44:20.743118 2824 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:44:20.743870 kubelet[2824]: W0904 17:44:20.743825 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.743969 kubelet[2824]: E0904 17:44:20.743876 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.744083 kubelet[2824]: E0904 17:44:20.744062 2824 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:44:20.744678 kubelet[2824]: I0904 17:44:20.744657 2824 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:44:20.777188 kubelet[2824]: I0904 17:44:20.777162 2824 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:44:20.777188 kubelet[2824]: I0904 17:44:20.777182 2824 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:44:20.777367 kubelet[2824]: I0904 17:44:20.777216 2824 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:44:20.784370 kubelet[2824]: I0904 17:44:20.783775 2824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:44:20.785254 kubelet[2824]: I0904 17:44:20.785230 2824 policy_none.go:49] "None policy: Start" Sep 4 17:44:20.785990 kubelet[2824]: I0904 17:44:20.785969 2824 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:44:20.786070 kubelet[2824]: I0904 17:44:20.786001 2824 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:44:20.786070 kubelet[2824]: I0904 17:44:20.786036 2824 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:44:20.786160 kubelet[2824]: E0904 17:44:20.786087 2824 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:44:20.788896 kubelet[2824]: W0904 17:44:20.787666 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.788896 kubelet[2824]: E0904 17:44:20.788226 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:20.788896 kubelet[2824]: I0904 17:44:20.788347 2824 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:44:20.788896 kubelet[2824]: I0904 17:44:20.788471 2824 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:44:20.798384 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:44:20.807829 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:44:20.810968 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:44:20.822066 kubelet[2824]: I0904 17:44:20.821621 2824 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:44:20.822066 kubelet[2824]: I0904 17:44:20.821927 2824 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:44:20.824095 kubelet[2824]: E0904 17:44:20.824070 2824 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054.1.0-a-c31d97b133\" not found" Sep 4 17:44:20.839413 kubelet[2824]: I0904 17:44:20.839386 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:20.839771 kubelet[2824]: E0904 17:44:20.839751 2824 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.10:6443/api/v1/nodes\": dial tcp 10.200.4.10:6443: connect: connection refused" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:20.887166 kubelet[2824]: I0904 17:44:20.887120 2824 topology_manager.go:215] "Topology Admit Handler" podUID="a0c24a1de05ee65e1c058628873d5174" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:20.889476 kubelet[2824]: I0904 17:44:20.889328 2824 topology_manager.go:215] "Topology Admit Handler" podUID="54b67807e5d18fd51c4c077fe34d7d7e" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:20.891423 kubelet[2824]: I0904 17:44:20.891092 2824 topology_manager.go:215] "Topology Admit Handler" podUID="9e7c5292bae8365787e56580c0f5de5d" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:20.899157 systemd[1]: Created slice kubepods-burstable-poda0c24a1de05ee65e1c058628873d5174.slice - libcontainer container 
kubepods-burstable-poda0c24a1de05ee65e1c058628873d5174.slice. Sep 4 17:44:20.921117 systemd[1]: Created slice kubepods-burstable-pod54b67807e5d18fd51c4c077fe34d7d7e.slice - libcontainer container kubepods-burstable-pod54b67807e5d18fd51c4c077fe34d7d7e.slice. Sep 4 17:44:20.925276 systemd[1]: Created slice kubepods-burstable-pod9e7c5292bae8365787e56580c0f5de5d.slice - libcontainer container kubepods-burstable-pod9e7c5292bae8365787e56580c0f5de5d.slice. Sep 4 17:44:20.942488 kubelet[2824]: E0904 17:44:20.942442 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-c31d97b133?timeout=10s\": dial tcp 10.200.4.10:6443: connect: connection refused" interval="400ms" Sep 4 17:44:21.042133 kubelet[2824]: I0904 17:44:21.041727 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0c24a1de05ee65e1c058628873d5174-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-c31d97b133\" (UID: \"a0c24a1de05ee65e1c058628873d5174\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042133 kubelet[2824]: I0904 17:44:21.041782 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0c24a1de05ee65e1c058628873d5174-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-c31d97b133\" (UID: \"a0c24a1de05ee65e1c058628873d5174\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042133 kubelet[2824]: I0904 17:44:21.041817 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042133 kubelet[2824]: I0904 17:44:21.041845 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e7c5292bae8365787e56580c0f5de5d-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-c31d97b133\" (UID: \"9e7c5292bae8365787e56580c0f5de5d\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042133 kubelet[2824]: I0904 17:44:21.041875 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0c24a1de05ee65e1c058628873d5174-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-c31d97b133\" (UID: \"a0c24a1de05ee65e1c058628873d5174\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042407 kubelet[2824]: I0904 17:44:21.041901 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042407 kubelet[2824]: I0904 17:44:21.041926 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042407 kubelet[2824]: I0904 17:44:21.041978 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.042407 kubelet[2824]: I0904 17:44:21.042025 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.043573 kubelet[2824]: I0904 17:44:21.043107 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.043573 kubelet[2824]: E0904 17:44:21.043463 2824 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.10:6443/api/v1/nodes\": dial tcp 10.200.4.10:6443: connect: connection refused" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.219421 containerd[1715]: time="2024-09-04T17:44:21.219366994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-c31d97b133,Uid:a0c24a1de05ee65e1c058628873d5174,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:21.225121 containerd[1715]: time="2024-09-04T17:44:21.225083633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-c31d97b133,Uid:54b67807e5d18fd51c4c077fe34d7d7e,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:21.228817 containerd[1715]: time="2024-09-04T17:44:21.228609720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-c31d97b133,Uid:9e7c5292bae8365787e56580c0f5de5d,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:21.343153 kubelet[2824]: E0904 17:44:21.343117 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-c31d97b133?timeout=10s\": dial tcp 10.200.4.10:6443: connect: connection refused" interval="800ms" Sep 4 17:44:21.445539 kubelet[2824]: I0904 17:44:21.445501 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.445953 kubelet[2824]: E0904 17:44:21.445904 2824 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.10:6443/api/v1/nodes\": dial tcp 10.200.4.10:6443: connect: connection refused" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:21.534719 kubelet[2824]: W0904 17:44:21.534644 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:21.534719 kubelet[2824]: E0904 17:44:21.534721 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list 
*v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:21.672376 kubelet[2824]: W0904 17:44:21.672224 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-c31d97b133&limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:21.672376 kubelet[2824]: E0904 17:44:21.672296 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-c31d97b133&limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:21.685830 kubelet[2824]: W0904 17:44:21.685766 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:21.685830 kubelet[2824]: E0904 17:44:21.685835 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:22.124580 kubelet[2824]: W0904 17:44:22.124510 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:22.124580 kubelet[2824]: E0904 17:44:22.124582 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:22.144261 kubelet[2824]: E0904 17:44:22.144222 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-c31d97b133?timeout=10s\": dial tcp 10.200.4.10:6443: connect: connection refused" interval="1.6s" Sep 4 17:44:22.248964 kubelet[2824]: I0904 17:44:22.248906 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:22.249375 kubelet[2824]: E0904 17:44:22.249347 2824 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.10:6443/api/v1/nodes\": dial tcp 10.200.4.10:6443: connect: connection refused" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:22.814513 kubelet[2824]: E0904 17:44:22.814472 2824 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.548739 kubelet[2824]: W0904 17:44:23.548692 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.548739 kubelet[2824]: E0904 17:44:23.548742 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.560374 kubelet[2824]: W0904 17:44:23.560337 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.560374 kubelet[2824]: E0904 17:44:23.560380 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.744959 kubelet[2824]: E0904 17:44:23.744908 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-c31d97b133?timeout=10s\": dial tcp 10.200.4.10:6443: connect: connection refused" interval="3.2s" Sep 4 17:44:23.840372 kubelet[2824]: W0904 17:44:23.840337 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.840508 kubelet[2824]: E0904 17:44:23.840381 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:23.851687 kubelet[2824]: I0904 17:44:23.851269 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:23.851687 kubelet[2824]: E0904 17:44:23.851632 2824 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.10:6443/api/v1/nodes\": dial tcp 10.200.4.10:6443: connect: connection refused" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:24.170628 kubelet[2824]: E0904 17:44:24.170509 2824 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054.1.0-a-c31d97b133.17f21b83f09ce938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054.1.0-a-c31d97b133,UID:ci-4054.1.0-a-c31d97b133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054.1.0-a-c31d97b133,},FirstTimestamp:2024-09-04 17:44:20.727802168 +0000 UTC m=+0.904658332,LastTimestamp:2024-09-04 17:44:20.727802168 +0000 UTC m=+0.904658332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054.1.0-a-c31d97b133,}" Sep 4 17:44:24.384270 kubelet[2824]: W0904 17:44:24.384225 2824 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-c31d97b133&limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:24.384270 kubelet[2824]: E0904 17:44:24.384275 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-c31d97b133&limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:25.961201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182845090.mount: Deactivated successfully. Sep 4 17:44:26.037354 containerd[1715]: time="2024-09-04T17:44:26.037295011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:44:26.040350 containerd[1715]: time="2024-09-04T17:44:26.040279670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:44:26.043613 containerd[1715]: time="2024-09-04T17:44:26.043563735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:44:26.046754 containerd[1715]: time="2024-09-04T17:44:26.046714598Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:44:26.050108 containerd[1715]: time="2024-09-04T17:44:26.050056264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:44:26.053365 containerd[1715]: time="2024-09-04T17:44:26.053312229Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:44:26.055918 containerd[1715]: time="2024-09-04T17:44:26.055654176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:44:26.061270 containerd[1715]: time="2024-09-04T17:44:26.061234287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:44:26.062013 containerd[1715]: time="2024-09-04T17:44:26.061976901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.83329658s" Sep 4 17:44:26.064082 containerd[1715]: time="2024-09-04T17:44:26.064049043Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
4.838889207s" Sep 4 17:44:26.064596 containerd[1715]: time="2024-09-04T17:44:26.064565453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.845103457s" Sep 4 17:44:26.900784 containerd[1715]: time="2024-09-04T17:44:26.899545852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:26.900784 containerd[1715]: time="2024-09-04T17:44:26.899606754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:26.900784 containerd[1715]: time="2024-09-04T17:44:26.899629354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:26.900784 containerd[1715]: time="2024-09-04T17:44:26.899716056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:26.902259 kubelet[2824]: E0904 17:44:26.901876 2824 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:26.904523 containerd[1715]: time="2024-09-04T17:44:26.902860918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:26.904523 containerd[1715]: time="2024-09-04T17:44:26.903739236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:26.904523 containerd[1715]: time="2024-09-04T17:44:26.903818637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:26.904523 containerd[1715]: time="2024-09-04T17:44:26.903994241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:26.907924 containerd[1715]: time="2024-09-04T17:44:26.907836817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:26.907924 containerd[1715]: time="2024-09-04T17:44:26.907897118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:26.908096 containerd[1715]: time="2024-09-04T17:44:26.907917219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:26.908096 containerd[1715]: time="2024-09-04T17:44:26.908030521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:26.940167 systemd[1]: Started cri-containerd-00288ef8cac7098bdb9637717f7a5d2e03cf30d47bc80063e34391e5ee5e1587.scope - libcontainer container 00288ef8cac7098bdb9637717f7a5d2e03cf30d47bc80063e34391e5ee5e1587. 
Sep 4 17:44:26.942627 systemd[1]: Started cri-containerd-bf3fe3cba75884f83a2b94860d6e808bd40dee402257568ea7fc3c33071cc48a.scope - libcontainer container bf3fe3cba75884f83a2b94860d6e808bd40dee402257568ea7fc3c33071cc48a. Sep 4 17:44:26.944685 systemd[1]: Started cri-containerd-ffd87eebced97d23ffcb1bd9a282317796e595b7f0932de47d11cbf673a5eced.scope - libcontainer container ffd87eebced97d23ffcb1bd9a282317796e595b7f0932de47d11cbf673a5eced. Sep 4 17:44:26.952238 kubelet[2824]: E0904 17:44:26.952157 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-c31d97b133?timeout=10s\": dial tcp 10.200.4.10:6443: connect: connection refused" interval="6.4s" Sep 4 17:44:27.009158 kubelet[2824]: W0904 17:44:27.009113 2824 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:27.009340 kubelet[2824]: E0904 17:44:27.009173 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.10:6443: connect: connection refused Sep 4 17:44:27.029866 containerd[1715]: time="2024-09-04T17:44:27.029757041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-c31d97b133,Uid:9e7c5292bae8365787e56580c0f5de5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"00288ef8cac7098bdb9637717f7a5d2e03cf30d47bc80063e34391e5ee5e1587\"" Sep 4 17:44:27.030263 containerd[1715]: time="2024-09-04T17:44:27.030176449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-c31d97b133,Uid:a0c24a1de05ee65e1c058628873d5174,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf3fe3cba75884f83a2b94860d6e808bd40dee402257568ea7fc3c33071cc48a\"" Sep 4 17:44:27.038103 containerd[1715]: time="2024-09-04T17:44:27.038026505Z" level=info msg="CreateContainer within sandbox \"bf3fe3cba75884f83a2b94860d6e808bd40dee402257568ea7fc3c33071cc48a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:44:27.038631 containerd[1715]: time="2024-09-04T17:44:27.038026505Z" level=info msg="CreateContainer within sandbox \"00288ef8cac7098bdb9637717f7a5d2e03cf30d47bc80063e34391e5ee5e1587\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:44:27.044220 containerd[1715]: time="2024-09-04T17:44:27.044123027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-c31d97b133,Uid:54b67807e5d18fd51c4c077fe34d7d7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd87eebced97d23ffcb1bd9a282317796e595b7f0932de47d11cbf673a5eced\"" Sep 4 17:44:27.046846 containerd[1715]: time="2024-09-04T17:44:27.046801880Z" level=info msg="CreateContainer within sandbox \"ffd87eebced97d23ffcb1bd9a282317796e595b7f0932de47d11cbf673a5eced\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:44:27.053495 kubelet[2824]: I0904 17:44:27.053462 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:27.054011 kubelet[2824]: E0904 17:44:27.053927 2824 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.10:6443/api/v1/nodes\": dial tcp 
10.200.4.10:6443: connect: connection refused" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:27.110732 containerd[1715]: time="2024-09-04T17:44:27.110680750Z" level=info msg="CreateContainer within sandbox \"00288ef8cac7098bdb9637717f7a5d2e03cf30d47bc80063e34391e5ee5e1587\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ab356ff88088e3ab2cd4fbc29b80667afc928e1b72a733b81286940fe953f8f\"" Sep 4 17:44:27.115510 containerd[1715]: time="2024-09-04T17:44:27.115462045Z" level=info msg="CreateContainer within sandbox \"ffd87eebced97d23ffcb1bd9a282317796e595b7f0932de47d11cbf673a5eced\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b651336ad23c7898f49dfa556b46071111025f4cc37583e0041b2a945ba33db\"" Sep 4 17:44:27.115875 containerd[1715]: time="2024-09-04T17:44:27.115756551Z" level=info msg="StartContainer for \"0ab356ff88088e3ab2cd4fbc29b80667afc928e1b72a733b81286940fe953f8f\"" Sep 4 17:44:27.118301 containerd[1715]: time="2024-09-04T17:44:27.116597467Z" level=info msg="CreateContainer within sandbox \"bf3fe3cba75884f83a2b94860d6e808bd40dee402257568ea7fc3c33071cc48a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cfed777a6ee04b7654b334ff4138a698d775bcf31b597c8488c4a2a7d1d508e0\"" Sep 4 17:44:27.118301 containerd[1715]: time="2024-09-04T17:44:27.116781571Z" level=info msg="StartContainer for \"6b651336ad23c7898f49dfa556b46071111025f4cc37583e0041b2a945ba33db\"" Sep 4 17:44:27.128748 containerd[1715]: time="2024-09-04T17:44:27.128698408Z" level=info msg="StartContainer for \"cfed777a6ee04b7654b334ff4138a698d775bcf31b597c8488c4a2a7d1d508e0\"" Sep 4 17:44:27.159161 systemd[1]: Started cri-containerd-0ab356ff88088e3ab2cd4fbc29b80667afc928e1b72a733b81286940fe953f8f.scope - libcontainer container 0ab356ff88088e3ab2cd4fbc29b80667afc928e1b72a733b81286940fe953f8f. Sep 4 17:44:27.160515 systemd[1]: Started cri-containerd-6b651336ad23c7898f49dfa556b46071111025f4cc37583e0041b2a945ba33db.scope - libcontainer container 6b651336ad23c7898f49dfa556b46071111025f4cc37583e0041b2a945ba33db. Sep 4 17:44:27.187132 systemd[1]: Started cri-containerd-cfed777a6ee04b7654b334ff4138a698d775bcf31b597c8488c4a2a7d1d508e0.scope - libcontainer container cfed777a6ee04b7654b334ff4138a698d775bcf31b597c8488c4a2a7d1d508e0. 
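The containerd entries above trace the CRI sequence for the static control-plane pods: "RunPodSandbox ... returns sandbox id", then "CreateContainer within sandbox", then "StartContainer". The sketch below outlines that same call order against a CRI runtime socket using the k8s.io/cri-api client; the metadata and image are placeholders and error handling is collapsed, so treat it as an outline of the sequence rather than what the kubelet actually runs.

// cri_sequence.go: outline of the RunPodSandbox -> CreateContainer ->
// StartContainer order visible in the containerd entries above.
// Placeholder metadata; not the kubelet's real sandbox or container configs.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-example", // placeholder; the log uses the node-suffixed pod name
			Namespace: "kube-system",
			Uid:       "9e7c5292bae8365787e56580c0f5de5d",
		},
	}

	// 1. RunPodSandbox returns the sandbox id that later entries refer to.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			// Image tag assumed from the kubelet version reported later in the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.29.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", created.ContainerId, sb.PodSandboxId)
}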
Sep 4 17:44:27.263777 containerd[1715]: time="2024-09-04T17:44:27.263235383Z" level=info msg="StartContainer for \"6b651336ad23c7898f49dfa556b46071111025f4cc37583e0041b2a945ba33db\" returns successfully" Sep 4 17:44:27.279021 containerd[1715]: time="2024-09-04T17:44:27.278977096Z" level=info msg="StartContainer for \"0ab356ff88088e3ab2cd4fbc29b80667afc928e1b72a733b81286940fe953f8f\" returns successfully" Sep 4 17:44:27.279954 containerd[1715]: time="2024-09-04T17:44:27.279486106Z" level=info msg="StartContainer for \"cfed777a6ee04b7654b334ff4138a698d775bcf31b597c8488c4a2a7d1d508e0\" returns successfully" Sep 4 17:44:29.507232 kubelet[2824]: E0904 17:44:29.507074 2824 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4054.1.0-a-c31d97b133" not found Sep 4 17:44:29.727029 kubelet[2824]: I0904 17:44:29.726982 2824 apiserver.go:52] "Watching apiserver" Sep 4 17:44:29.741561 kubelet[2824]: I0904 17:44:29.741522 2824 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:44:29.868317 kubelet[2824]: E0904 17:44:29.868267 2824 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4054.1.0-a-c31d97b133" not found Sep 4 17:44:30.311865 kubelet[2824]: E0904 17:44:30.311622 2824 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4054.1.0-a-c31d97b133" not found Sep 4 17:44:30.824276 kubelet[2824]: E0904 17:44:30.824202 2824 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054.1.0-a-c31d97b133\" not found" Sep 4 17:44:31.211617 kubelet[2824]: E0904 17:44:31.211581 2824 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4054.1.0-a-c31d97b133" not found Sep 4 17:44:33.356177 kubelet[2824]: E0904 17:44:33.356127 2824 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054.1.0-a-c31d97b133\" not found" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:33.456741 kubelet[2824]: I0904 17:44:33.456691 2824 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:33.462310 kubelet[2824]: I0904 17:44:33.462271 2824 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:33.653590 kubelet[2824]: W0904 17:44:33.653313 2824 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:44:34.035420 systemd[1]: Reloading requested from client PID 3101 ('systemctl') (unit session-9.scope)... Sep 4 17:44:34.035436 systemd[1]: Reloading... Sep 4 17:44:34.125973 zram_generator::config[3135]: No configuration found. Sep 4 17:44:34.292532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:44:34.385710 systemd[1]: Reloading finished in 349 ms. Sep 4 17:44:34.426480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:44:34.440191 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 4 17:44:34.440446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:44:34.446235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:44:34.542233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:44:34.549928 (kubelet)[3205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:44:34.613656 kubelet[3205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:44:34.613656 kubelet[3205]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:44:34.613656 kubelet[3205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:44:34.613656 kubelet[3205]: I0904 17:44:34.613467 3205 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:44:35.109612 kubelet[3205]: I0904 17:44:35.109575 3205 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:44:35.109612 kubelet[3205]: I0904 17:44:35.109602 3205 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:44:35.109870 kubelet[3205]: I0904 17:44:35.109849 3205 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:44:35.111270 kubelet[3205]: I0904 17:44:35.111242 3205 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:44:35.113358 kubelet[3205]: I0904 17:44:35.113162 3205 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:44:35.119427 kubelet[3205]: I0904 17:44:35.119402 3205 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:44:35.119680 kubelet[3205]: I0904 17:44:35.119661 3205 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:44:35.119862 kubelet[3205]: I0904 17:44:35.119844 3205 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:44:35.120025 kubelet[3205]: I0904 17:44:35.119873 3205 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:44:35.120025 kubelet[3205]: I0904 17:44:35.119886 3205 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:44:35.120025 kubelet[3205]: I0904 17:44:35.119928 3205 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:44:35.120164 kubelet[3205]: I0904 17:44:35.120057 3205 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:44:35.120164 kubelet[3205]: I0904 17:44:35.120076 3205 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:44:35.120164 kubelet[3205]: I0904 17:44:35.120107 3205 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:44:35.120164 kubelet[3205]: I0904 17:44:35.120122 3205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:44:35.122752 kubelet[3205]: I0904 17:44:35.121456 3205 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:44:35.122752 kubelet[3205]: I0904 17:44:35.121645 3205 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:44:35.122752 kubelet[3205]: I0904 17:44:35.122161 3205 server.go:1256] "Started kubelet" Sep 4 17:44:35.125863 kubelet[3205]: I0904 17:44:35.125840 3205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:44:35.129635 kubelet[3205]: I0904 17:44:35.129613 3205 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:44:35.145304 kubelet[3205]: I0904 17:44:35.145276 3205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:44:35.145748 kubelet[3205]: I0904 17:44:35.145731 3205 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:44:35.151045 kubelet[3205]: I0904 17:44:35.151023 3205 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:44:35.158438 kubelet[3205]: I0904 17:44:35.158413 3205 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:44:35.158731 kubelet[3205]: I0904 17:44:35.158713 3205 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:44:35.158893 kubelet[3205]: I0904 17:44:35.158877 3205 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:44:35.164432 kubelet[3205]: I0904 17:44:35.164411 3205 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:44:35.172607 kubelet[3205]: E0904 17:44:35.172565 3205 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:44:35.174431 kubelet[3205]: I0904 17:44:35.173218 3205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:44:35.174579 kubelet[3205]: I0904 17:44:35.174556 3205 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:44:35.174579 kubelet[3205]: I0904 17:44:35.174580 3205 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:44:35.176167 kubelet[3205]: I0904 17:44:35.176152 3205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:44:35.176281 kubelet[3205]: I0904 17:44:35.176272 3205 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:44:35.176602 kubelet[3205]: I0904 17:44:35.176349 3205 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:44:35.176602 kubelet[3205]: E0904 17:44:35.176400 3205 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:44:35.256531 kubelet[3205]: I0904 17:44:35.256491 3205 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.258128 kubelet[3205]: I0904 17:44:35.258070 3205 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:44:35.258128 kubelet[3205]: I0904 17:44:35.258095 3205 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:44:35.258357 kubelet[3205]: I0904 17:44:35.258281 3205 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:44:35.259946 kubelet[3205]: I0904 17:44:35.259874 3205 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:44:35.259946 kubelet[3205]: I0904 17:44:35.259919 3205 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:44:35.260163 kubelet[3205]: I0904 17:44:35.259928 3205 policy_none.go:49] "None policy: Start" Sep 4 17:44:35.264695 kubelet[3205]: I0904 17:44:35.262868 3205 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:44:35.264695 kubelet[3205]: I0904 17:44:35.262894 3205 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:44:35.264695 kubelet[3205]: I0904 17:44:35.263195 3205 state_mem.go:75] "Updated machine memory state" Sep 4 17:44:35.274307 kubelet[3205]: I0904 17:44:35.273672 3205 kubelet_node_status.go:112] "Node was previously registered" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.274307 kubelet[3205]: I0904 17:44:35.273846 3205 kubelet_node_status.go:76] 
"Successfully registered node" node="ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.277082 kubelet[3205]: I0904 17:44:35.275462 3205 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:44:35.277082 kubelet[3205]: I0904 17:44:35.275726 3205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:44:35.277082 kubelet[3205]: I0904 17:44:35.276742 3205 topology_manager.go:215] "Topology Admit Handler" podUID="a0c24a1de05ee65e1c058628873d5174" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.277082 kubelet[3205]: I0904 17:44:35.276914 3205 topology_manager.go:215] "Topology Admit Handler" podUID="54b67807e5d18fd51c4c077fe34d7d7e" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.277082 kubelet[3205]: I0904 17:44:35.277066 3205 topology_manager.go:215] "Topology Admit Handler" podUID="9e7c5292bae8365787e56580c0f5de5d" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.291256 kubelet[3205]: W0904 17:44:35.291234 3205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:44:35.298128 kubelet[3205]: W0904 17:44:35.298099 3205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:44:35.299279 kubelet[3205]: W0904 17:44:35.298481 3205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:44:35.299279 kubelet[3205]: E0904 17:44:35.298550 3205 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" already exists" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.362615 kubelet[3205]: I0904 17:44:35.362488 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.364889 kubelet[3205]: I0904 17:44:35.363026 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0c24a1de05ee65e1c058628873d5174-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-c31d97b133\" (UID: \"a0c24a1de05ee65e1c058628873d5174\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.364889 kubelet[3205]: I0904 17:44:35.363081 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0c24a1de05ee65e1c058628873d5174-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-c31d97b133\" (UID: \"a0c24a1de05ee65e1c058628873d5174\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.364889 kubelet[3205]: I0904 17:44:35.363114 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.364889 kubelet[3205]: I0904 17:44:35.363145 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.364889 kubelet[3205]: I0904 17:44:35.363179 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.365200 kubelet[3205]: I0904 17:44:35.363209 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e7c5292bae8365787e56580c0f5de5d-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-c31d97b133\" (UID: \"9e7c5292bae8365787e56580c0f5de5d\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.365200 kubelet[3205]: I0904 17:44:35.363238 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0c24a1de05ee65e1c058628873d5174-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-c31d97b133\" (UID: \"a0c24a1de05ee65e1c058628873d5174\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:35.365200 kubelet[3205]: I0904 17:44:35.363287 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54b67807e5d18fd51c4c077fe34d7d7e-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-c31d97b133\" (UID: \"54b67807e5d18fd51c4c077fe34d7d7e\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" Sep 4 17:44:36.123615 kubelet[3205]: I0904 17:44:36.123563 3205 apiserver.go:52] "Watching apiserver" Sep 4 17:44:36.159147 kubelet[3205]: I0904 17:44:36.158995 3205 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:44:36.261033 kubelet[3205]: I0904 17:44:36.260841 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054.1.0-a-c31d97b133" podStartSLOduration=1.260786535 podStartE2EDuration="1.260786535s" podCreationTimestamp="2024-09-04 17:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:44:36.260525829 +0000 UTC m=+1.704691992" watchObservedRunningTime="2024-09-04 17:44:36.260786535 +0000 UTC m=+1.704952798" Sep 4 17:44:36.300636 kubelet[3205]: I0904 17:44:36.300590 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054.1.0-a-c31d97b133" podStartSLOduration=1.300538103 podStartE2EDuration="1.300538103s" podCreationTimestamp="2024-09-04 17:44:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:44:36.279264985 +0000 UTC m=+1.723431148" watchObservedRunningTime="2024-09-04 17:44:36.300538103 +0000 UTC m=+1.744704266" Sep 4 17:44:39.801521 sudo[2242]: pam_unix(sudo:session): session closed for user root Sep 4 17:44:39.895594 sshd[2239]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:39.898869 systemd[1]: sshd@6-10.200.4.10:22-10.200.16.10:47940.service: Deactivated successfully. Sep 4 17:44:39.901009 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:44:39.901234 systemd[1]: session-9.scope: Consumed 4.483s CPU time, 139.7M memory peak, 0B memory swap peak. Sep 4 17:44:39.902956 systemd-logind[1689]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:44:39.904060 systemd-logind[1689]: Removed session 9. Sep 4 17:44:40.979436 kubelet[3205]: I0904 17:44:40.979335 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-c31d97b133" podStartSLOduration=7.979296576 podStartE2EDuration="7.979296576s" podCreationTimestamp="2024-09-04 17:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:44:36.332547282 +0000 UTC m=+1.776713445" watchObservedRunningTime="2024-09-04 17:44:40.979296576 +0000 UTC m=+6.423462839" Sep 4 17:44:45.856502 kubelet[3205]: I0904 17:44:45.856470 3205 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:44:45.857124 kubelet[3205]: I0904 17:44:45.857069 3205 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:44:45.857186 containerd[1715]: time="2024-09-04T17:44:45.856855094Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:44:46.688779 kubelet[3205]: I0904 17:44:46.688726 3205 topology_manager.go:215] "Topology Admit Handler" podUID="f9919661-15e5-462b-a405-151b5e943eab" podNamespace="kube-system" podName="kube-proxy-n2x9c" Sep 4 17:44:46.700782 systemd[1]: Created slice kubepods-besteffort-podf9919661_15e5_462b_a405_151b5e943eab.slice - libcontainer container kubepods-besteffort-podf9919661_15e5_462b_a405_151b5e943eab.slice. 
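The pod_startup_latency_tracker entries report podStartE2EDuration as the gap between podCreationTimestamp and observedRunningTime (the pull timestamps are zero for these static pods, so the SLO and E2E durations coincide). The snippet below redoes that arithmetic for the kube-scheduler entry above; the layout string is an assumption chosen to match how the timestamps are printed in the log.

// startup_duration.go: recompute podStartE2EDuration for the kube-scheduler
// entry from its logged timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout guessed from the log's "2024-09-04 17:44:36.260786535 +0000 UTC" form.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2024-09-04 17:44:35 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-09-04 17:44:36.260786535 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 1.260786535s, matching the logged podStartE2EDuration.
	fmt.Println(running.Sub(created))
}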
Sep 4 17:44:46.739587 kubelet[3205]: I0904 17:44:46.739345 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9919661-15e5-462b-a405-151b5e943eab-kube-proxy\") pod \"kube-proxy-n2x9c\" (UID: \"f9919661-15e5-462b-a405-151b5e943eab\") " pod="kube-system/kube-proxy-n2x9c" Sep 4 17:44:46.739587 kubelet[3205]: I0904 17:44:46.739408 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9919661-15e5-462b-a405-151b5e943eab-lib-modules\") pod \"kube-proxy-n2x9c\" (UID: \"f9919661-15e5-462b-a405-151b5e943eab\") " pod="kube-system/kube-proxy-n2x9c" Sep 4 17:44:46.739587 kubelet[3205]: I0904 17:44:46.739439 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9919661-15e5-462b-a405-151b5e943eab-xtables-lock\") pod \"kube-proxy-n2x9c\" (UID: \"f9919661-15e5-462b-a405-151b5e943eab\") " pod="kube-system/kube-proxy-n2x9c" Sep 4 17:44:46.739587 kubelet[3205]: I0904 17:44:46.739476 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxpg\" (UniqueName: \"kubernetes.io/projected/f9919661-15e5-462b-a405-151b5e943eab-kube-api-access-2kxpg\") pod \"kube-proxy-n2x9c\" (UID: \"f9919661-15e5-462b-a405-151b5e943eab\") " pod="kube-system/kube-proxy-n2x9c" Sep 4 17:44:46.910970 kubelet[3205]: I0904 17:44:46.910683 3205 topology_manager.go:215] "Topology Admit Handler" podUID="087cd54a-315e-481d-bd00-0ad293b49c0e" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-kvlhl" Sep 4 17:44:46.913884 kubelet[3205]: W0904 17:44:46.913356 3205 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:44:46.913884 kubelet[3205]: E0904 17:44:46.913395 3205 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:44:46.924474 systemd[1]: Created slice kubepods-besteffort-pod087cd54a_315e_481d_bd00_0ad293b49c0e.slice - libcontainer container kubepods-besteffort-pod087cd54a_315e_481d_bd00_0ad293b49c0e.slice. 
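The reconciler entries above attach four volumes to kube-proxy-n2x9c: the "kube-proxy" ConfigMap, the "lib-modules" and "xtables-lock" host paths, and the projected service-account token volume "kube-api-access-2kxpg". Below is a hedged reconstruction of what that volume block looks like in pod-spec terms, using the k8s.io/api/core/v1 types; the host paths are the conventional ones for kube-proxy and are assumptions, since the log only names the volumes.

// kubeproxy_volumes.go: the volume layout implied by the
// VerifyControllerAttachedVolume entries for kube-proxy-n2x9c.
// Host paths are assumed; only the volume names come from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		{
			Name: "xtables-lock",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
			},
		},
		// kube-api-access-2kxpg is the auto-injected projected volume
		// (service-account token, kube-root-ca.crt, namespace); omitted here
		// because the kubelet builds it rather than the pod author.
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}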
Sep 4 17:44:46.940779 kubelet[3205]: I0904 17:44:46.940686 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/087cd54a-315e-481d-bd00-0ad293b49c0e-var-lib-calico\") pod \"tigera-operator-5d56685c77-kvlhl\" (UID: \"087cd54a-315e-481d-bd00-0ad293b49c0e\") " pod="tigera-operator/tigera-operator-5d56685c77-kvlhl" Sep 4 17:44:46.940779 kubelet[3205]: I0904 17:44:46.940769 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvgt\" (UniqueName: \"kubernetes.io/projected/087cd54a-315e-481d-bd00-0ad293b49c0e-kube-api-access-xsvgt\") pod \"tigera-operator-5d56685c77-kvlhl\" (UID: \"087cd54a-315e-481d-bd00-0ad293b49c0e\") " pod="tigera-operator/tigera-operator-5d56685c77-kvlhl" Sep 4 17:44:47.012921 containerd[1715]: time="2024-09-04T17:44:47.012876297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2x9c,Uid:f9919661-15e5-462b-a405-151b5e943eab,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:47.059993 containerd[1715]: time="2024-09-04T17:44:47.056989339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:47.059993 containerd[1715]: time="2024-09-04T17:44:47.057090741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:47.059993 containerd[1715]: time="2024-09-04T17:44:47.057105742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:47.059993 containerd[1715]: time="2024-09-04T17:44:47.057269346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:47.086067 systemd[1]: Started cri-containerd-b1d60b5714d56d2d5d8916f6af553334765174676127bde3c500d0f734f0a076.scope - libcontainer container b1d60b5714d56d2d5d8916f6af553334765174676127bde3c500d0f734f0a076. Sep 4 17:44:47.107373 containerd[1715]: time="2024-09-04T17:44:47.107305827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2x9c,Uid:f9919661-15e5-462b-a405-151b5e943eab,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1d60b5714d56d2d5d8916f6af553334765174676127bde3c500d0f734f0a076\"" Sep 4 17:44:47.110112 containerd[1715]: time="2024-09-04T17:44:47.110073592Z" level=info msg="CreateContainer within sandbox \"b1d60b5714d56d2d5d8916f6af553334765174676127bde3c500d0f734f0a076\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:44:47.146050 containerd[1715]: time="2024-09-04T17:44:47.146005041Z" level=info msg="CreateContainer within sandbox \"b1d60b5714d56d2d5d8916f6af553334765174676127bde3c500d0f734f0a076\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"202afe7ee1d4800757cc1aff923555b7c77067582175cd43e7f29e3c2b2a7184\"" Sep 4 17:44:47.146689 containerd[1715]: time="2024-09-04T17:44:47.146616955Z" level=info msg="StartContainer for \"202afe7ee1d4800757cc1aff923555b7c77067582175cd43e7f29e3c2b2a7184\"" Sep 4 17:44:47.174536 systemd[1]: Started cri-containerd-202afe7ee1d4800757cc1aff923555b7c77067582175cd43e7f29e3c2b2a7184.scope - libcontainer container 202afe7ee1d4800757cc1aff923555b7c77067582175cd43e7f29e3c2b2a7184. 
Sep 4 17:44:47.212596 containerd[1715]: time="2024-09-04T17:44:47.212427209Z" level=info msg="StartContainer for \"202afe7ee1d4800757cc1aff923555b7c77067582175cd43e7f29e3c2b2a7184\" returns successfully" Sep 4 17:44:47.230497 containerd[1715]: time="2024-09-04T17:44:47.230447234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-kvlhl,Uid:087cd54a-315e-481d-bd00-0ad293b49c0e,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:44:47.289841 containerd[1715]: time="2024-09-04T17:44:47.289490928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:47.290942 containerd[1715]: time="2024-09-04T17:44:47.290527953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:47.290942 containerd[1715]: time="2024-09-04T17:44:47.290554653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:47.290942 containerd[1715]: time="2024-09-04T17:44:47.290638555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:47.312580 systemd[1]: Started cri-containerd-f500abea2d6c1bd675ec411c30f479ec22b8ed9716cf34320248b9415345cbe2.scope - libcontainer container f500abea2d6c1bd675ec411c30f479ec22b8ed9716cf34320248b9415345cbe2. Sep 4 17:44:47.350232 containerd[1715]: time="2024-09-04T17:44:47.350138060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-kvlhl,Uid:087cd54a-315e-481d-bd00-0ad293b49c0e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f500abea2d6c1bd675ec411c30f479ec22b8ed9716cf34320248b9415345cbe2\"" Sep 4 17:44:47.352212 containerd[1715]: time="2024-09-04T17:44:47.351990504Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:44:48.940446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828369632.mount: Deactivated successfully. 
Sep 4 17:44:49.494675 containerd[1715]: time="2024-09-04T17:44:49.494613192Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:49.496610 containerd[1715]: time="2024-09-04T17:44:49.496538137Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521" Sep 4 17:44:49.499870 containerd[1715]: time="2024-09-04T17:44:49.499806414Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:49.504662 containerd[1715]: time="2024-09-04T17:44:49.504613228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:49.505552 containerd[1715]: time="2024-09-04T17:44:49.505370346Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.15334144s" Sep 4 17:44:49.505552 containerd[1715]: time="2024-09-04T17:44:49.505409047Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:44:49.508212 containerd[1715]: time="2024-09-04T17:44:49.508178512Z" level=info msg="CreateContainer within sandbox \"f500abea2d6c1bd675ec411c30f479ec22b8ed9716cf34320248b9415345cbe2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:44:49.546756 containerd[1715]: time="2024-09-04T17:44:49.546716122Z" level=info msg="CreateContainer within sandbox \"f500abea2d6c1bd675ec411c30f479ec22b8ed9716cf34320248b9415345cbe2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f970e6cad892c791bbd63fe4b1fcea3e0fb18ec5f124b61c716fd5fa87018a3c\"" Sep 4 17:44:49.548721 containerd[1715]: time="2024-09-04T17:44:49.547292436Z" level=info msg="StartContainer for \"f970e6cad892c791bbd63fe4b1fcea3e0fb18ec5f124b61c716fd5fa87018a3c\"" Sep 4 17:44:49.575086 systemd[1]: Started cri-containerd-f970e6cad892c791bbd63fe4b1fcea3e0fb18ec5f124b61c716fd5fa87018a3c.scope - libcontainer container f970e6cad892c791bbd63fe4b1fcea3e0fb18ec5f124b61c716fd5fa87018a3c. 
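The entries from PullImage "quay.io/tigera/operator:v1.34.3" through the "Pulled image ... in 2.15334144s" line above show containerd resolving the tag to a repo digest and recording the pull time and size. Something similar can be reproduced directly against containerd with its v1 Go client, as sketched below; the "k8s.io" namespace matches what the CRI plugin uses, but the snippet is a standalone illustration, not what the kubelet invokes.

// pull_operator.go: pull the tigera-operator image straight through
// containerd's Go client, mirroring the PullImage entries above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.34.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The log reports this tag at 22130728 bytes with the sha256:2cc4de6a... repo digest.
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}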
Sep 4 17:44:49.601726 containerd[1715]: time="2024-09-04T17:44:49.601607218Z" level=info msg="StartContainer for \"f970e6cad892c791bbd63fe4b1fcea3e0fb18ec5f124b61c716fd5fa87018a3c\" returns successfully" Sep 4 17:44:50.261033 kubelet[3205]: I0904 17:44:50.260964 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n2x9c" podStartSLOduration=4.260899784 podStartE2EDuration="4.260899784s" podCreationTimestamp="2024-09-04 17:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:44:47.258479196 +0000 UTC m=+12.702645359" watchObservedRunningTime="2024-09-04 17:44:50.260899784 +0000 UTC m=+15.705065947" Sep 4 17:44:52.628221 kubelet[3205]: I0904 17:44:52.628175 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-kvlhl" podStartSLOduration=4.473475503 podStartE2EDuration="6.628101174s" podCreationTimestamp="2024-09-04 17:44:46 +0000 UTC" firstStartedPulling="2024-09-04 17:44:47.351323288 +0000 UTC m=+12.795527452" lastFinishedPulling="2024-09-04 17:44:49.50598696 +0000 UTC m=+14.950153123" observedRunningTime="2024-09-04 17:44:50.261288093 +0000 UTC m=+15.705454256" watchObservedRunningTime="2024-09-04 17:44:52.628101174 +0000 UTC m=+18.072267337" Sep 4 17:44:52.629195 kubelet[3205]: I0904 17:44:52.629159 3205 topology_manager.go:215] "Topology Admit Handler" podUID="b94663c4-0cb9-4504-aedc-c25d80866dcb" podNamespace="calico-system" podName="calico-typha-d578657f8-q58fm" Sep 4 17:44:52.641541 systemd[1]: Created slice kubepods-besteffort-podb94663c4_0cb9_4504_aedc_c25d80866dcb.slice - libcontainer container kubepods-besteffort-podb94663c4_0cb9_4504_aedc_c25d80866dcb.slice. 
Sep 4 17:44:52.680215 kubelet[3205]: I0904 17:44:52.680063 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94663c4-0cb9-4504-aedc-c25d80866dcb-tigera-ca-bundle\") pod \"calico-typha-d578657f8-q58fm\" (UID: \"b94663c4-0cb9-4504-aedc-c25d80866dcb\") " pod="calico-system/calico-typha-d578657f8-q58fm" Sep 4 17:44:52.681763 kubelet[3205]: I0904 17:44:52.681742 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b94663c4-0cb9-4504-aedc-c25d80866dcb-typha-certs\") pod \"calico-typha-d578657f8-q58fm\" (UID: \"b94663c4-0cb9-4504-aedc-c25d80866dcb\") " pod="calico-system/calico-typha-d578657f8-q58fm" Sep 4 17:44:52.682000 kubelet[3205]: I0904 17:44:52.681985 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfsdj\" (UniqueName: \"kubernetes.io/projected/b94663c4-0cb9-4504-aedc-c25d80866dcb-kube-api-access-nfsdj\") pod \"calico-typha-d578657f8-q58fm\" (UID: \"b94663c4-0cb9-4504-aedc-c25d80866dcb\") " pod="calico-system/calico-typha-d578657f8-q58fm" Sep 4 17:44:52.753945 kubelet[3205]: I0904 17:44:52.753896 3205 topology_manager.go:215] "Topology Admit Handler" podUID="a9034484-3100-4d3a-8734-39d62992ecbc" podNamespace="calico-system" podName="calico-node-pg9qn" Sep 4 17:44:52.759489 kubelet[3205]: W0904 17:44:52.759463 3205 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:44:52.759693 kubelet[3205]: E0904 17:44:52.759666 3205 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:44:52.759991 kubelet[3205]: W0904 17:44:52.759948 3205 reflector.go:539] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:44:52.759991 kubelet[3205]: E0904 17:44:52.759973 3205 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:44:52.766165 systemd[1]: Created slice kubepods-besteffort-poda9034484_3100_4d3a_8734_39d62992ecbc.slice - libcontainer container kubepods-besteffort-poda9034484_3100_4d3a_8734_39d62992ecbc.slice. 
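calico-node mounts a flexvol-driver-host host path, and in the entries just after its volume list the kubelet starts probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the FlexVolume "init" call, failing with "executable file not found in $PATH" and "unexpected end of JSON input" because the empty output cannot be unmarshalled. For context, a FlexVolume driver is just an executable that answers "init" with a small JSON document; a minimal sketch of that handshake is below, with field names following the FlexVolume convention and everything else assumed.

// flexvol_init.go: minimal FlexVolume-style "init" reply. An empty reply is
// exactly what produces the kubelet's "unexpected end of JSON input" errors
// seen further down in this log.
package main

import (
	"encoding/json"
	"os"
)

// initResponse follows the FlexVolume driver-call convention: a status plus
// an optional capabilities map reported on "init".
type initResponse struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Tell the kubelet this driver does not need attach/detach support.
		_ = json.NewEncoder(os.Stdout).Encode(initResponse{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Unimplemented calls should report "Not supported" per the convention.
	_ = json.NewEncoder(os.Stdout).Encode(initResponse{Status: "Not supported"})
}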
Sep 4 17:44:52.784827 kubelet[3205]: I0904 17:44:52.782690 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-cni-log-dir\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.784827 kubelet[3205]: I0904 17:44:52.782755 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m8t\" (UniqueName: \"kubernetes.io/projected/a9034484-3100-4d3a-8734-39d62992ecbc-kube-api-access-l9m8t\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.784827 kubelet[3205]: I0904 17:44:52.782790 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-xtables-lock\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.784827 kubelet[3205]: I0904 17:44:52.782817 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a9034484-3100-4d3a-8734-39d62992ecbc-node-certs\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.784827 kubelet[3205]: I0904 17:44:52.782853 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-cni-net-dir\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785171 kubelet[3205]: I0904 17:44:52.782881 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9034484-3100-4d3a-8734-39d62992ecbc-tigera-ca-bundle\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785171 kubelet[3205]: I0904 17:44:52.782911 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-var-run-calico\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785171 kubelet[3205]: I0904 17:44:52.782949 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-cni-bin-dir\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785171 kubelet[3205]: I0904 17:44:52.782985 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-lib-modules\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785171 kubelet[3205]: I0904 17:44:52.783014 3205 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-policysync\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785371 kubelet[3205]: I0904 17:44:52.783046 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-var-lib-calico\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.785371 kubelet[3205]: I0904 17:44:52.783074 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a9034484-3100-4d3a-8734-39d62992ecbc-flexvol-driver-host\") pod \"calico-node-pg9qn\" (UID: \"a9034484-3100-4d3a-8734-39d62992ecbc\") " pod="calico-system/calico-node-pg9qn" Sep 4 17:44:52.887271 kubelet[3205]: E0904 17:44:52.887156 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.887436 kubelet[3205]: W0904 17:44:52.887418 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.887549 kubelet[3205]: E0904 17:44:52.887537 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.909277 kubelet[3205]: I0904 17:44:52.909248 3205 topology_manager.go:215] "Topology Admit Handler" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" podNamespace="calico-system" podName="csi-node-driver-cm46f" Sep 4 17:44:52.911560 kubelet[3205]: E0904 17:44:52.911388 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:44:52.912768 kubelet[3205]: E0904 17:44:52.909366 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.912768 kubelet[3205]: W0904 17:44:52.912707 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.912768 kubelet[3205]: E0904 17:44:52.912731 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:52.949254 containerd[1715]: time="2024-09-04T17:44:52.949203355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d578657f8-q58fm,Uid:b94663c4-0cb9-4504-aedc-c25d80866dcb,Namespace:calico-system,Attempt:0,}" Sep 4 17:44:52.962885 kubelet[3205]: E0904 17:44:52.962853 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.962885 kubelet[3205]: W0904 17:44:52.962878 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.963247 kubelet[3205]: E0904 17:44:52.962906 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.963247 kubelet[3205]: E0904 17:44:52.963196 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.964005 kubelet[3205]: W0904 17:44:52.963975 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.964103 kubelet[3205]: E0904 17:44:52.964025 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.964319 kubelet[3205]: E0904 17:44:52.964301 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.964319 kubelet[3205]: W0904 17:44:52.964317 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.964463 kubelet[3205]: E0904 17:44:52.964333 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.964576 kubelet[3205]: E0904 17:44:52.964562 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.964650 kubelet[3205]: W0904 17:44:52.964577 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.964650 kubelet[3205]: E0904 17:44:52.964608 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:52.964861 kubelet[3205]: E0904 17:44:52.964846 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.964939 kubelet[3205]: W0904 17:44:52.964862 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.964939 kubelet[3205]: E0904 17:44:52.964879 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.965145 kubelet[3205]: E0904 17:44:52.965114 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.965145 kubelet[3205]: W0904 17:44:52.965126 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.965145 kubelet[3205]: E0904 17:44:52.965141 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.965410 kubelet[3205]: E0904 17:44:52.965362 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.965410 kubelet[3205]: W0904 17:44:52.965373 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.965410 kubelet[3205]: E0904 17:44:52.965401 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.965691 kubelet[3205]: E0904 17:44:52.965675 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.965691 kubelet[3205]: W0904 17:44:52.965690 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.965822 kubelet[3205]: E0904 17:44:52.965705 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.966056 kubelet[3205]: E0904 17:44:52.966016 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.966056 kubelet[3205]: W0904 17:44:52.966027 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.967028 kubelet[3205]: E0904 17:44:52.966984 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:52.967263 kubelet[3205]: E0904 17:44:52.967247 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.967263 kubelet[3205]: W0904 17:44:52.967263 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.967413 kubelet[3205]: E0904 17:44:52.967279 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.967519 kubelet[3205]: E0904 17:44:52.967498 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.967519 kubelet[3205]: W0904 17:44:52.967515 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.967714 kubelet[3205]: E0904 17:44:52.967531 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.967862 kubelet[3205]: E0904 17:44:52.967765 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.967862 kubelet[3205]: W0904 17:44:52.967789 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.967862 kubelet[3205]: E0904 17:44:52.967808 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.968422 kubelet[3205]: E0904 17:44:52.968049 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.968422 kubelet[3205]: W0904 17:44:52.968060 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.968422 kubelet[3205]: E0904 17:44:52.968076 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.969074 kubelet[3205]: E0904 17:44:52.968586 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.969074 kubelet[3205]: W0904 17:44:52.968598 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.969074 kubelet[3205]: E0904 17:44:52.968615 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:52.969074 kubelet[3205]: E0904 17:44:52.968832 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.969074 kubelet[3205]: W0904 17:44:52.968843 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.969074 kubelet[3205]: E0904 17:44:52.968858 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.969882 kubelet[3205]: E0904 17:44:52.969840 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.969882 kubelet[3205]: W0904 17:44:52.969858 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.970581 kubelet[3205]: E0904 17:44:52.969908 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.970581 kubelet[3205]: E0904 17:44:52.970166 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.970581 kubelet[3205]: W0904 17:44:52.970178 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.970581 kubelet[3205]: E0904 17:44:52.970194 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.970581 kubelet[3205]: E0904 17:44:52.970439 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.970581 kubelet[3205]: W0904 17:44:52.970450 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.970581 kubelet[3205]: E0904 17:44:52.970489 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.973198 kubelet[3205]: E0904 17:44:52.970829 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.973198 kubelet[3205]: W0904 17:44:52.970840 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.973198 kubelet[3205]: E0904 17:44:52.970863 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:52.973198 kubelet[3205]: E0904 17:44:52.971474 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.973198 kubelet[3205]: W0904 17:44:52.971486 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.973198 kubelet[3205]: E0904 17:44:52.971502 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.992126 kubelet[3205]: E0904 17:44:52.992072 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.992577 kubelet[3205]: W0904 17:44:52.992290 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.992577 kubelet[3205]: E0904 17:44:52.992324 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.992676 kubelet[3205]: E0904 17:44:52.992607 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.992676 kubelet[3205]: W0904 17:44:52.992620 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.992676 kubelet[3205]: E0904 17:44:52.992644 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.992798 kubelet[3205]: I0904 17:44:52.992710 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjmc\" (UniqueName: \"kubernetes.io/projected/be0ad617-90e9-4ff4-80e2-c29502c9b417-kube-api-access-fhjmc\") pod \"csi-node-driver-cm46f\" (UID: \"be0ad617-90e9-4ff4-80e2-c29502c9b417\") " pod="calico-system/csi-node-driver-cm46f" Sep 4 17:44:52.995382 kubelet[3205]: E0904 17:44:52.993000 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.995382 kubelet[3205]: W0904 17:44:52.993020 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.995382 kubelet[3205]: E0904 17:44:52.993039 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:52.995382 kubelet[3205]: E0904 17:44:52.993343 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.995382 kubelet[3205]: W0904 17:44:52.993355 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.995382 kubelet[3205]: E0904 17:44:52.994423 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.995382 kubelet[3205]: I0904 17:44:52.994624 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be0ad617-90e9-4ff4-80e2-c29502c9b417-registration-dir\") pod \"csi-node-driver-cm46f\" (UID: \"be0ad617-90e9-4ff4-80e2-c29502c9b417\") " pod="calico-system/csi-node-driver-cm46f" Sep 4 17:44:52.995382 kubelet[3205]: E0904 17:44:52.994680 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.995382 kubelet[3205]: W0904 17:44:52.994689 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.995794 kubelet[3205]: E0904 17:44:52.994707 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.995794 kubelet[3205]: E0904 17:44:52.995061 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.995794 kubelet[3205]: W0904 17:44:52.995072 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.995794 kubelet[3205]: E0904 17:44:52.995109 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:52.995794 kubelet[3205]: E0904 17:44:52.995546 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:52.995794 kubelet[3205]: W0904 17:44:52.995560 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:52.995794 kubelet[3205]: E0904 17:44:52.995581 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.000006 kubelet[3205]: E0904 17:44:52.999494 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.000006 kubelet[3205]: W0904 17:44:52.999514 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.000006 kubelet[3205]: E0904 17:44:52.999532 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.000006 kubelet[3205]: I0904 17:44:52.999562 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/be0ad617-90e9-4ff4-80e2-c29502c9b417-varrun\") pod \"csi-node-driver-cm46f\" (UID: \"be0ad617-90e9-4ff4-80e2-c29502c9b417\") " pod="calico-system/csi-node-driver-cm46f" Sep 4 17:44:53.001984 kubelet[3205]: E0904 17:44:53.001734 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.001984 kubelet[3205]: W0904 17:44:53.001752 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.001984 kubelet[3205]: E0904 17:44:53.001774 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.001984 kubelet[3205]: I0904 17:44:53.001804 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be0ad617-90e9-4ff4-80e2-c29502c9b417-socket-dir\") pod \"csi-node-driver-cm46f\" (UID: \"be0ad617-90e9-4ff4-80e2-c29502c9b417\") " pod="calico-system/csi-node-driver-cm46f" Sep 4 17:44:53.002586 kubelet[3205]: E0904 17:44:53.002395 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.002586 kubelet[3205]: W0904 17:44:53.002413 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.002586 kubelet[3205]: E0904 17:44:53.002430 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.002586 kubelet[3205]: I0904 17:44:53.002458 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be0ad617-90e9-4ff4-80e2-c29502c9b417-kubelet-dir\") pod \"csi-node-driver-cm46f\" (UID: \"be0ad617-90e9-4ff4-80e2-c29502c9b417\") " pod="calico-system/csi-node-driver-cm46f" Sep 4 17:44:53.003295 kubelet[3205]: E0904 17:44:53.003197 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.003295 kubelet[3205]: W0904 17:44:53.003216 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.003295 kubelet[3205]: E0904 17:44:53.003234 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.004777 kubelet[3205]: E0904 17:44:53.004700 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.004777 kubelet[3205]: W0904 17:44:53.004714 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.004777 kubelet[3205]: E0904 17:44:53.004732 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.007431 kubelet[3205]: E0904 17:44:53.005621 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.007431 kubelet[3205]: W0904 17:44:53.005634 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.007431 kubelet[3205]: E0904 17:44:53.005655 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.007431 kubelet[3205]: E0904 17:44:53.006335 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.007431 kubelet[3205]: W0904 17:44:53.006347 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.007431 kubelet[3205]: E0904 17:44:53.006432 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.008141 kubelet[3205]: E0904 17:44:53.008120 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.008141 kubelet[3205]: W0904 17:44:53.008139 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.008267 kubelet[3205]: E0904 17:44:53.008156 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.008799 kubelet[3205]: E0904 17:44:53.008785 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.008905 kubelet[3205]: W0904 17:44:53.008893 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.009028 kubelet[3205]: E0904 17:44:53.009017 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.018611 containerd[1715]: time="2024-09-04T17:44:53.018496792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:53.019673 containerd[1715]: time="2024-09-04T17:44:53.019574217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:53.019673 containerd[1715]: time="2024-09-04T17:44:53.019601118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:53.020072 containerd[1715]: time="2024-09-04T17:44:53.019894325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:53.056154 systemd[1]: Started cri-containerd-2913fd53c6701e415755e067c4943e4aadacb9fa1c31fab8a244f2cbae6eae8c.scope - libcontainer container 2913fd53c6701e415755e067c4943e4aadacb9fa1c31fab8a244f2cbae6eae8c. Sep 4 17:44:53.104549 kubelet[3205]: E0904 17:44:53.104359 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.104549 kubelet[3205]: W0904 17:44:53.104381 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.104549 kubelet[3205]: E0904 17:44:53.104409 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.106363 kubelet[3205]: E0904 17:44:53.106157 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.106363 kubelet[3205]: W0904 17:44:53.106175 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.106363 kubelet[3205]: E0904 17:44:53.106204 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.107055 kubelet[3205]: E0904 17:44:53.106488 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.107055 kubelet[3205]: W0904 17:44:53.106499 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.107055 kubelet[3205]: E0904 17:44:53.106654 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.107055 kubelet[3205]: E0904 17:44:53.106708 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.107055 kubelet[3205]: W0904 17:44:53.106717 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.107055 kubelet[3205]: E0904 17:44:53.106747 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.107449 kubelet[3205]: E0904 17:44:53.107332 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.107449 kubelet[3205]: W0904 17:44:53.107346 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.107449 kubelet[3205]: E0904 17:44:53.107363 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.108349 kubelet[3205]: E0904 17:44:53.108087 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.108349 kubelet[3205]: W0904 17:44:53.108102 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.108349 kubelet[3205]: E0904 17:44:53.108117 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.110288 kubelet[3205]: E0904 17:44:53.110101 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.110288 kubelet[3205]: W0904 17:44:53.110117 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.110580 kubelet[3205]: E0904 17:44:53.110486 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.110580 kubelet[3205]: W0904 17:44:53.110499 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.113629 kubelet[3205]: E0904 17:44:53.113021 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.113629 kubelet[3205]: E0904 17:44:53.113067 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.113629 kubelet[3205]: E0904 17:44:53.113228 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.113629 kubelet[3205]: W0904 17:44:53.113243 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.113629 kubelet[3205]: E0904 17:44:53.113269 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.113629 kubelet[3205]: E0904 17:44:53.113493 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.113629 kubelet[3205]: W0904 17:44:53.113504 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.113629 kubelet[3205]: E0904 17:44:53.113519 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.114164 kubelet[3205]: E0904 17:44:53.114062 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.114164 kubelet[3205]: W0904 17:44:53.114076 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.114164 kubelet[3205]: E0904 17:44:53.114094 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.114481 kubelet[3205]: E0904 17:44:53.114465 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.114481 kubelet[3205]: W0904 17:44:53.114481 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.114718 kubelet[3205]: E0904 17:44:53.114505 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.115201 kubelet[3205]: E0904 17:44:53.115045 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.115201 kubelet[3205]: W0904 17:44:53.115059 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.115201 kubelet[3205]: E0904 17:44:53.115118 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.115903 kubelet[3205]: E0904 17:44:53.115314 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.115903 kubelet[3205]: W0904 17:44:53.115325 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.115903 kubelet[3205]: E0904 17:44:53.115361 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.115903 kubelet[3205]: E0904 17:44:53.115508 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.115903 kubelet[3205]: W0904 17:44:53.115518 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.115903 kubelet[3205]: E0904 17:44:53.115561 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.115903 kubelet[3205]: E0904 17:44:53.115701 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.115903 kubelet[3205]: W0904 17:44:53.115711 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.115903 kubelet[3205]: E0904 17:44:53.115739 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.115924 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.117067 kubelet[3205]: W0904 17:44:53.115950 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.115981 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.116169 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.117067 kubelet[3205]: W0904 17:44:53.116180 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.116200 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.116403 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.117067 kubelet[3205]: W0904 17:44:53.116414 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.116434 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.117067 kubelet[3205]: E0904 17:44:53.116640 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.119318 kubelet[3205]: W0904 17:44:53.116651 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.119318 kubelet[3205]: E0904 17:44:53.116672 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.119318 kubelet[3205]: E0904 17:44:53.118052 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.119318 kubelet[3205]: W0904 17:44:53.118067 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.119318 kubelet[3205]: E0904 17:44:53.118091 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.119318 kubelet[3205]: E0904 17:44:53.118314 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.119318 kubelet[3205]: W0904 17:44:53.118326 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.119318 kubelet[3205]: E0904 17:44:53.118418 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.119318 kubelet[3205]: E0904 17:44:53.118563 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.119318 kubelet[3205]: W0904 17:44:53.118573 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.119746 kubelet[3205]: E0904 17:44:53.118601 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.119746 kubelet[3205]: E0904 17:44:53.118803 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.119746 kubelet[3205]: W0904 17:44:53.118813 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.119746 kubelet[3205]: E0904 17:44:53.118830 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.119746 kubelet[3205]: E0904 17:44:53.119223 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.119746 kubelet[3205]: W0904 17:44:53.119236 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.119746 kubelet[3205]: E0904 17:44:53.119261 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.120033 kubelet[3205]: E0904 17:44:53.120017 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.120033 kubelet[3205]: W0904 17:44:53.120029 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.120113 kubelet[3205]: E0904 17:44:53.120046 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.124775 containerd[1715]: time="2024-09-04T17:44:53.124701199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d578657f8-q58fm,Uid:b94663c4-0cb9-4504-aedc-c25d80866dcb,Namespace:calico-system,Attempt:0,} returns sandbox id \"2913fd53c6701e415755e067c4943e4aadacb9fa1c31fab8a244f2cbae6eae8c\"" Sep 4 17:44:53.128883 containerd[1715]: time="2024-09-04T17:44:53.127815073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:44:53.132018 kubelet[3205]: E0904 17:44:53.131963 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.132018 kubelet[3205]: W0904 17:44:53.131990 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.132254 kubelet[3205]: E0904 17:44:53.132014 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.216507 kubelet[3205]: E0904 17:44:53.216106 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.216507 kubelet[3205]: W0904 17:44:53.216232 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.216507 kubelet[3205]: E0904 17:44:53.216262 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.318210 kubelet[3205]: E0904 17:44:53.318167 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.318210 kubelet[3205]: W0904 17:44:53.318204 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.318448 kubelet[3205]: E0904 17:44:53.318231 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.419776 kubelet[3205]: E0904 17:44:53.419719 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.419776 kubelet[3205]: W0904 17:44:53.419771 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.420083 kubelet[3205]: E0904 17:44:53.419799 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:53.521697 kubelet[3205]: E0904 17:44:53.521420 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.521697 kubelet[3205]: W0904 17:44:53.521447 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.521697 kubelet[3205]: E0904 17:44:53.521474 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.622624 kubelet[3205]: E0904 17:44:53.622584 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.622624 kubelet[3205]: W0904 17:44:53.622637 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.622624 kubelet[3205]: E0904 17:44:53.622666 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.724212 kubelet[3205]: E0904 17:44:53.724163 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.724871 kubelet[3205]: W0904 17:44:53.724759 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.724871 kubelet[3205]: E0904 17:44:53.724821 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.758629 kubelet[3205]: E0904 17:44:53.758597 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:53.758629 kubelet[3205]: W0904 17:44:53.758618 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:53.758629 kubelet[3205]: E0904 17:44:53.758642 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:53.970326 containerd[1715]: time="2024-09-04T17:44:53.970270663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pg9qn,Uid:a9034484-3100-4d3a-8734-39d62992ecbc,Namespace:calico-system,Attempt:0,}" Sep 4 17:44:54.040296 containerd[1715]: time="2024-09-04T17:44:54.040027510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:54.040296 containerd[1715]: time="2024-09-04T17:44:54.040102212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:54.040296 containerd[1715]: time="2024-09-04T17:44:54.040117512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:54.040296 containerd[1715]: time="2024-09-04T17:44:54.040207514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:54.089139 systemd[1]: Started cri-containerd-47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825.scope - libcontainer container 47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825. Sep 4 17:44:54.161969 containerd[1715]: time="2024-09-04T17:44:54.160746160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pg9qn,Uid:a9034484-3100-4d3a-8734-39d62992ecbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\"" Sep 4 17:44:55.179472 kubelet[3205]: E0904 17:44:55.178915 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:44:55.200275 containerd[1715]: time="2024-09-04T17:44:55.200215427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:55.204750 containerd[1715]: time="2024-09-04T17:44:55.204694726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:44:55.208245 containerd[1715]: time="2024-09-04T17:44:55.208062800Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:55.214185 containerd[1715]: time="2024-09-04T17:44:55.214134034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:55.215779 containerd[1715]: time="2024-09-04T17:44:55.215471963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.08761279s" Sep 4 17:44:55.215779 containerd[1715]: time="2024-09-04T17:44:55.215511964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:44:55.219959 containerd[1715]: time="2024-09-04T17:44:55.217669912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:44:55.248439 containerd[1715]: time="2024-09-04T17:44:55.248399688Z" level=info msg="CreateContainer within sandbox \"2913fd53c6701e415755e067c4943e4aadacb9fa1c31fab8a244f2cbae6eae8c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:44:55.299659 containerd[1715]: time="2024-09-04T17:44:55.299616516Z" level=info msg="CreateContainer within sandbox \"2913fd53c6701e415755e067c4943e4aadacb9fa1c31fab8a244f2cbae6eae8c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"393d5737a5b0e1bd9ad8f84f7fdfaae86e089d41276ecfea069b0d38da0058e2\"" Sep 4 17:44:55.300291 containerd[1715]: time="2024-09-04T17:44:55.300123427Z" level=info msg="StartContainer for \"393d5737a5b0e1bd9ad8f84f7fdfaae86e089d41276ecfea069b0d38da0058e2\"" Sep 4 17:44:55.341682 systemd[1]: Started cri-containerd-393d5737a5b0e1bd9ad8f84f7fdfaae86e089d41276ecfea069b0d38da0058e2.scope - libcontainer container 393d5737a5b0e1bd9ad8f84f7fdfaae86e089d41276ecfea069b0d38da0058e2. Sep 4 17:44:55.421154 containerd[1715]: time="2024-09-04T17:44:55.420482178Z" level=info msg="StartContainer for \"393d5737a5b0e1bd9ad8f84f7fdfaae86e089d41276ecfea069b0d38da0058e2\" returns successfully" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297188 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.298965 kubelet[3205]: W0904 17:44:56.297215 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297242 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297466 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.298965 kubelet[3205]: W0904 17:44:56.297477 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297492 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297667 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.298965 kubelet[3205]: W0904 17:44:56.297677 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297693 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.298965 kubelet[3205]: E0904 17:44:56.297902 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.301749 kubelet[3205]: W0904 17:44:56.297913 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.301749 kubelet[3205]: E0904 17:44:56.297927 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.301749 kubelet[3205]: E0904 17:44:56.298136 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.301749 kubelet[3205]: W0904 17:44:56.298146 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.301749 kubelet[3205]: E0904 17:44:56.298159 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.301749 kubelet[3205]: E0904 17:44:56.298325 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.301749 kubelet[3205]: W0904 17:44:56.298335 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.301749 kubelet[3205]: E0904 17:44:56.298353 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.301749 kubelet[3205]: E0904 17:44:56.298534 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.301749 kubelet[3205]: W0904 17:44:56.298543 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.298557 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.298726 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.302267 kubelet[3205]: W0904 17:44:56.298735 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.298748 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.298949 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.302267 kubelet[3205]: W0904 17:44:56.298960 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.298975 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.299162 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.302267 kubelet[3205]: W0904 17:44:56.299171 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302267 kubelet[3205]: E0904 17:44:56.299185 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299355 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.302660 kubelet[3205]: W0904 17:44:56.299363 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299379 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299536 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.302660 kubelet[3205]: W0904 17:44:56.299545 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299558 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299717 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.302660 kubelet[3205]: W0904 17:44:56.299725 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299737 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.302660 kubelet[3205]: E0904 17:44:56.299887 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.303059 kubelet[3205]: W0904 17:44:56.299896 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.303059 kubelet[3205]: E0904 17:44:56.299909 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.303059 kubelet[3205]: E0904 17:44:56.300072 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.303059 kubelet[3205]: W0904 17:44:56.300083 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.303059 kubelet[3205]: E0904 17:44:56.300097 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.308371 kubelet[3205]: I0904 17:44:56.307927 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-d578657f8-q58fm" podStartSLOduration=2.217949178 podStartE2EDuration="4.307788318s" podCreationTimestamp="2024-09-04 17:44:52 +0000 UTC" firstStartedPulling="2024-09-04 17:44:53.126884551 +0000 UTC m=+18.571050814" lastFinishedPulling="2024-09-04 17:44:55.216723791 +0000 UTC m=+20.660889954" observedRunningTime="2024-09-04 17:44:56.307707216 +0000 UTC m=+21.751873479" watchObservedRunningTime="2024-09-04 17:44:56.307788318 +0000 UTC m=+21.751954581" Sep 4 17:44:56.343351 kubelet[3205]: E0904 17:44:56.343316 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.343538 kubelet[3205]: W0904 17:44:56.343453 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.343538 kubelet[3205]: E0904 17:44:56.343492 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.344023 kubelet[3205]: E0904 17:44:56.343973 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.344023 kubelet[3205]: W0904 17:44:56.343995 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.344452 kubelet[3205]: E0904 17:44:56.344024 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.344770 kubelet[3205]: E0904 17:44:56.344687 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.344770 kubelet[3205]: W0904 17:44:56.344758 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.345058 kubelet[3205]: E0904 17:44:56.344908 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.345539 kubelet[3205]: E0904 17:44:56.345452 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.345539 kubelet[3205]: W0904 17:44:56.345469 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.345733 kubelet[3205]: E0904 17:44:56.345618 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.346664 kubelet[3205]: E0904 17:44:56.346640 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.346664 kubelet[3205]: W0904 17:44:56.346657 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.347015 kubelet[3205]: E0904 17:44:56.346687 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.347015 kubelet[3205]: E0904 17:44:56.346926 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.347015 kubelet[3205]: W0904 17:44:56.346976 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.347181 kubelet[3205]: E0904 17:44:56.347087 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.347491 kubelet[3205]: E0904 17:44:56.347471 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.348058 kubelet[3205]: W0904 17:44:56.347486 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.348401 kubelet[3205]: E0904 17:44:56.348314 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.348401 kubelet[3205]: W0904 17:44:56.348331 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.348726 kubelet[3205]: E0904 17:44:56.348709 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.348789 kubelet[3205]: W0904 17:44:56.348728 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.348789 kubelet[3205]: E0904 17:44:56.348748 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.349211 kubelet[3205]: E0904 17:44:56.349182 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.349289 kubelet[3205]: E0904 17:44:56.349224 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.349358 kubelet[3205]: E0904 17:44:56.349293 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.349358 kubelet[3205]: W0904 17:44:56.349304 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.349358 kubelet[3205]: E0904 17:44:56.349328 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.349862 kubelet[3205]: E0904 17:44:56.349832 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.349862 kubelet[3205]: W0904 17:44:56.349847 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.350171 kubelet[3205]: E0904 17:44:56.349872 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.350352 kubelet[3205]: E0904 17:44:56.350325 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.350423 kubelet[3205]: W0904 17:44:56.350341 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.350423 kubelet[3205]: E0904 17:44:56.350393 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.350726 kubelet[3205]: E0904 17:44:56.350700 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.350726 kubelet[3205]: W0904 17:44:56.350714 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.350914 kubelet[3205]: E0904 17:44:56.350833 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.351369 kubelet[3205]: E0904 17:44:56.351190 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.351369 kubelet[3205]: W0904 17:44:56.351204 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.351369 kubelet[3205]: E0904 17:44:56.351231 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.351791 kubelet[3205]: E0904 17:44:56.351780 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.351868 kubelet[3205]: W0904 17:44:56.351857 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.352301 kubelet[3205]: E0904 17:44:56.351991 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.352566 kubelet[3205]: E0904 17:44:56.352552 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.352652 kubelet[3205]: W0904 17:44:56.352640 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.352798 kubelet[3205]: E0904 17:44:56.352717 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:44:56.353185 kubelet[3205]: E0904 17:44:56.353130 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.353185 kubelet[3205]: W0904 17:44:56.353144 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.353185 kubelet[3205]: E0904 17:44:56.353159 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.353929 kubelet[3205]: E0904 17:44:56.353905 3205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:44:56.353929 kubelet[3205]: W0904 17:44:56.353921 3205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:44:56.353929 kubelet[3205]: E0904 17:44:56.353959 3205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:44:56.512424 containerd[1715]: time="2024-09-04T17:44:56.512369923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:56.514822 containerd[1715]: time="2024-09-04T17:44:56.514761375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:44:56.520485 containerd[1715]: time="2024-09-04T17:44:56.520426800Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:56.525604 containerd[1715]: time="2024-09-04T17:44:56.524553091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:56.525604 containerd[1715]: time="2024-09-04T17:44:56.525417310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.307697598s" Sep 4 17:44:56.525604 containerd[1715]: time="2024-09-04T17:44:56.525451411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:44:56.527885 containerd[1715]: time="2024-09-04T17:44:56.527850064Z" level=info msg="CreateContainer within sandbox \"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:44:56.568996 containerd[1715]: time="2024-09-04T17:44:56.568850766Z" level=info msg="CreateContainer within sandbox 
\"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf\"" Sep 4 17:44:56.572130 containerd[1715]: time="2024-09-04T17:44:56.570656006Z" level=info msg="StartContainer for \"dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf\"" Sep 4 17:44:56.609150 systemd[1]: Started cri-containerd-dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf.scope - libcontainer container dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf. Sep 4 17:44:56.658813 containerd[1715]: time="2024-09-04T17:44:56.658773647Z" level=info msg="StartContainer for \"dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf\" returns successfully" Sep 4 17:44:56.664634 systemd[1]: cri-containerd-dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf.scope: Deactivated successfully. Sep 4 17:44:56.710733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf-rootfs.mount: Deactivated successfully. Sep 4 17:44:57.178679 kubelet[3205]: E0904 17:44:57.177367 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:44:57.290252 kubelet[3205]: I0904 17:44:57.290222 3205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:44:57.961785 containerd[1715]: time="2024-09-04T17:44:57.961711339Z" level=info msg="shim disconnected" id=dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf namespace=k8s.io Sep 4 17:44:57.961785 containerd[1715]: time="2024-09-04T17:44:57.961776941Z" level=warning msg="cleaning up after shim disconnected" id=dd3e0dc06c6513d209234d4bd7cace2f9dd360ee2bbdb26501ca411d8968fbdf namespace=k8s.io Sep 4 17:44:57.961785 containerd[1715]: time="2024-09-04T17:44:57.961789541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:58.294410 containerd[1715]: time="2024-09-04T17:44:58.294088159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:44:59.179012 kubelet[3205]: E0904 17:44:59.177477 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:44:59.913246 kubelet[3205]: I0904 17:44:59.912821 3205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:45:01.178838 kubelet[3205]: E0904 17:45:01.178789 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:45:02.102965 containerd[1715]: time="2024-09-04T17:45:02.102899474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:02.105834 containerd[1715]: 
time="2024-09-04T17:45:02.105777343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:45:02.109240 containerd[1715]: time="2024-09-04T17:45:02.109182225Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:02.113944 containerd[1715]: time="2024-09-04T17:45:02.113877737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:02.115107 containerd[1715]: time="2024-09-04T17:45:02.114540253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.820404394s" Sep 4 17:45:02.115107 containerd[1715]: time="2024-09-04T17:45:02.114577154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:45:02.116901 containerd[1715]: time="2024-09-04T17:45:02.116870409Z" level=info msg="CreateContainer within sandbox \"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:45:02.152424 containerd[1715]: time="2024-09-04T17:45:02.152382159Z" level=info msg="CreateContainer within sandbox \"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550\"" Sep 4 17:45:02.152912 containerd[1715]: time="2024-09-04T17:45:02.152770368Z" level=info msg="StartContainer for \"7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550\"" Sep 4 17:45:02.188067 systemd[1]: Started cri-containerd-7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550.scope - libcontainer container 7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550. Sep 4 17:45:02.216533 containerd[1715]: time="2024-09-04T17:45:02.216402291Z" level=info msg="StartContainer for \"7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550\" returns successfully" Sep 4 17:45:03.177371 kubelet[3205]: E0904 17:45:03.176892 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:45:03.627956 containerd[1715]: time="2024-09-04T17:45:03.627836578Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:45:03.629874 systemd[1]: cri-containerd-7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550.scope: Deactivated successfully. 
Sep 4 17:45:03.653146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550-rootfs.mount: Deactivated successfully. Sep 4 17:45:03.700797 kubelet[3205]: I0904 17:45:03.699817 3205 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:45:04.104531 kubelet[3205]: I0904 17:45:03.722256 3205 topology_manager.go:215] "Topology Admit Handler" podUID="e7fb02c2-be95-4a70-bfdb-d1ac140a6478" podNamespace="kube-system" podName="coredns-76f75df574-z98pm" Sep 4 17:45:04.104531 kubelet[3205]: I0904 17:45:03.728046 3205 topology_manager.go:215] "Topology Admit Handler" podUID="ee5bb778-32b6-45f7-a2da-cca7834e3b0b" podNamespace="calico-system" podName="calico-kube-controllers-7dfdb78649-8k6wh" Sep 4 17:45:04.104531 kubelet[3205]: I0904 17:45:03.730556 3205 topology_manager.go:215] "Topology Admit Handler" podUID="a3dec762-fd79-4f26-8e9e-de588a8c2d0d" podNamespace="kube-system" podName="coredns-76f75df574-mngb6" Sep 4 17:45:04.104531 kubelet[3205]: W0904 17:45:03.731678 3205 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:45:04.104531 kubelet[3205]: E0904 17:45:03.731719 3205 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4054.1.0-a-c31d97b133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4054.1.0-a-c31d97b133' and this object Sep 4 17:45:04.104531 kubelet[3205]: I0904 17:45:03.797066 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee5bb778-32b6-45f7-a2da-cca7834e3b0b-tigera-ca-bundle\") pod \"calico-kube-controllers-7dfdb78649-8k6wh\" (UID: \"ee5bb778-32b6-45f7-a2da-cca7834e3b0b\") " pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" Sep 4 17:45:03.740666 systemd[1]: Created slice kubepods-burstable-pode7fb02c2_be95_4a70_bfdb_d1ac140a6478.slice - libcontainer container kubepods-burstable-pode7fb02c2_be95_4a70_bfdb_d1ac140a6478.slice. 
Sep 4 17:45:04.105019 kubelet[3205]: I0904 17:45:03.797163 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkp6q\" (UniqueName: \"kubernetes.io/projected/ee5bb778-32b6-45f7-a2da-cca7834e3b0b-kube-api-access-fkp6q\") pod \"calico-kube-controllers-7dfdb78649-8k6wh\" (UID: \"ee5bb778-32b6-45f7-a2da-cca7834e3b0b\") " pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" Sep 4 17:45:04.105019 kubelet[3205]: I0904 17:45:03.797205 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qbgg\" (UniqueName: \"kubernetes.io/projected/e7fb02c2-be95-4a70-bfdb-d1ac140a6478-kube-api-access-4qbgg\") pod \"coredns-76f75df574-z98pm\" (UID: \"e7fb02c2-be95-4a70-bfdb-d1ac140a6478\") " pod="kube-system/coredns-76f75df574-z98pm" Sep 4 17:45:04.105019 kubelet[3205]: I0904 17:45:03.797315 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3dec762-fd79-4f26-8e9e-de588a8c2d0d-config-volume\") pod \"coredns-76f75df574-mngb6\" (UID: \"a3dec762-fd79-4f26-8e9e-de588a8c2d0d\") " pod="kube-system/coredns-76f75df574-mngb6" Sep 4 17:45:04.105019 kubelet[3205]: I0904 17:45:03.797398 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhz7\" (UniqueName: \"kubernetes.io/projected/a3dec762-fd79-4f26-8e9e-de588a8c2d0d-kube-api-access-4xhz7\") pod \"coredns-76f75df574-mngb6\" (UID: \"a3dec762-fd79-4f26-8e9e-de588a8c2d0d\") " pod="kube-system/coredns-76f75df574-mngb6" Sep 4 17:45:04.105019 kubelet[3205]: I0904 17:45:03.797427 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7fb02c2-be95-4a70-bfdb-d1ac140a6478-config-volume\") pod \"coredns-76f75df574-z98pm\" (UID: \"e7fb02c2-be95-4a70-bfdb-d1ac140a6478\") " pod="kube-system/coredns-76f75df574-z98pm" Sep 4 17:45:03.751042 systemd[1]: Created slice kubepods-besteffort-podee5bb778_32b6_45f7_a2da_cca7834e3b0b.slice - libcontainer container kubepods-besteffort-podee5bb778_32b6_45f7_a2da_cca7834e3b0b.slice. Sep 4 17:45:03.761036 systemd[1]: Created slice kubepods-burstable-poda3dec762_fd79_4f26_8e9e_de588a8c2d0d.slice - libcontainer container kubepods-burstable-poda3dec762_fd79_4f26_8e9e_de588a8c2d0d.slice. Sep 4 17:45:04.455225 containerd[1715]: time="2024-09-04T17:45:04.455078819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfdb78649-8k6wh,Uid:ee5bb778-32b6-45f7-a2da-cca7834e3b0b,Namespace:calico-system,Attempt:0,}" Sep 4 17:45:05.051392 containerd[1715]: time="2024-09-04T17:45:05.051078442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mngb6,Uid:a3dec762-fd79-4f26-8e9e-de588a8c2d0d,Namespace:kube-system,Attempt:0,}" Sep 4 17:45:05.051392 containerd[1715]: time="2024-09-04T17:45:05.051078542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z98pm,Uid:e7fb02c2-be95-4a70-bfdb-d1ac140a6478,Namespace:kube-system,Attempt:0,}" Sep 4 17:45:05.183446 systemd[1]: Created slice kubepods-besteffort-podbe0ad617_90e9_4ff4_80e2_c29502c9b417.slice - libcontainer container kubepods-besteffort-podbe0ad617_90e9_4ff4_80e2_c29502c9b417.slice. 
Sep 4 17:45:05.185607 containerd[1715]: time="2024-09-04T17:45:05.185557651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cm46f,Uid:be0ad617-90e9-4ff4-80e2-c29502c9b417,Namespace:calico-system,Attempt:0,}" Sep 4 17:45:05.220211 containerd[1715]: time="2024-09-04T17:45:05.220069274Z" level=info msg="shim disconnected" id=7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550 namespace=k8s.io Sep 4 17:45:05.220211 containerd[1715]: time="2024-09-04T17:45:05.220126476Z" level=warning msg="cleaning up after shim disconnected" id=7584310ab591da7dba0215f33f953591be58d5495e63f8f20a92b70ddb696550 namespace=k8s.io Sep 4 17:45:05.220211 containerd[1715]: time="2024-09-04T17:45:05.220140076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:45:05.233782 containerd[1715]: time="2024-09-04T17:45:05.233739001Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:45:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:45:05.325083 containerd[1715]: time="2024-09-04T17:45:05.323654046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:45:05.483747 containerd[1715]: time="2024-09-04T17:45:05.483700366Z" level=error msg="Failed to destroy network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.484091 containerd[1715]: time="2024-09-04T17:45:05.484057774Z" level=error msg="encountered an error cleaning up failed sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.484180 containerd[1715]: time="2024-09-04T17:45:05.484119576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfdb78649-8k6wh,Uid:ee5bb778-32b6-45f7-a2da-cca7834e3b0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.485021 kubelet[3205]: E0904 17:45:05.484543 3205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.485021 kubelet[3205]: E0904 17:45:05.484622 3205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" Sep 4 17:45:05.485021 kubelet[3205]: E0904 17:45:05.484652 3205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" Sep 4 17:45:05.485488 kubelet[3205]: E0904 17:45:05.484736 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dfdb78649-8k6wh_calico-system(ee5bb778-32b6-45f7-a2da-cca7834e3b0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dfdb78649-8k6wh_calico-system(ee5bb778-32b6-45f7-a2da-cca7834e3b0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" podUID="ee5bb778-32b6-45f7-a2da-cca7834e3b0b" Sep 4 17:45:05.491776 containerd[1715]: time="2024-09-04T17:45:05.491733457Z" level=error msg="Failed to destroy network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.492424 containerd[1715]: time="2024-09-04T17:45:05.492382473Z" level=error msg="encountered an error cleaning up failed sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.494245 containerd[1715]: time="2024-09-04T17:45:05.492459275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z98pm,Uid:e7fb02c2-be95-4a70-bfdb-d1ac140a6478,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.494342 kubelet[3205]: E0904 17:45:05.492716 3205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.494342 kubelet[3205]: E0904 17:45:05.492786 3205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z98pm" Sep 4 17:45:05.494342 kubelet[3205]: E0904 17:45:05.492813 3205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z98pm" Sep 4 17:45:05.494862 kubelet[3205]: E0904 17:45:05.493810 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z98pm_kube-system(e7fb02c2-be95-4a70-bfdb-d1ac140a6478)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z98pm_kube-system(e7fb02c2-be95-4a70-bfdb-d1ac140a6478)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z98pm" podUID="e7fb02c2-be95-4a70-bfdb-d1ac140a6478" Sep 4 17:45:05.499736 containerd[1715]: time="2024-09-04T17:45:05.499696247Z" level=error msg="Failed to destroy network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.500156 containerd[1715]: time="2024-09-04T17:45:05.500035155Z" level=error msg="encountered an error cleaning up failed sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.500156 containerd[1715]: time="2024-09-04T17:45:05.500100057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cm46f,Uid:be0ad617-90e9-4ff4-80e2-c29502c9b417,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.500395 kubelet[3205]: E0904 17:45:05.500326 3205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.500395 kubelet[3205]: E0904 17:45:05.500377 3205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cm46f" Sep 4 17:45:05.500747 kubelet[3205]: E0904 17:45:05.500408 3205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cm46f" Sep 4 17:45:05.500747 kubelet[3205]: E0904 17:45:05.500476 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cm46f_calico-system(be0ad617-90e9-4ff4-80e2-c29502c9b417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cm46f_calico-system(be0ad617-90e9-4ff4-80e2-c29502c9b417)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:45:05.509723 containerd[1715]: time="2024-09-04T17:45:05.509685286Z" level=error msg="Failed to destroy network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.510092 containerd[1715]: time="2024-09-04T17:45:05.510059795Z" level=error msg="encountered an error cleaning up failed sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.510215 containerd[1715]: time="2024-09-04T17:45:05.510117296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mngb6,Uid:a3dec762-fd79-4f26-8e9e-de588a8c2d0d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.510388 kubelet[3205]: E0904 17:45:05.510364 3205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:05.510474 kubelet[3205]: E0904 17:45:05.510423 3205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mngb6" Sep 4 17:45:05.510474 kubelet[3205]: E0904 17:45:05.510453 3205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mngb6" Sep 4 17:45:05.510616 kubelet[3205]: E0904 17:45:05.510588 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mngb6_kube-system(a3dec762-fd79-4f26-8e9e-de588a8c2d0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mngb6_kube-system(a3dec762-fd79-4f26-8e9e-de588a8c2d0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mngb6" podUID="a3dec762-fd79-4f26-8e9e-de588a8c2d0d" Sep 4 17:45:06.274194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099-shm.mount: Deactivated successfully. Sep 4 17:45:06.274654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266-shm.mount: Deactivated successfully. Sep 4 17:45:06.274755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5-shm.mount: Deactivated successfully. 
Sep 4 17:45:06.323430 kubelet[3205]: I0904 17:45:06.322874 3205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:06.323869 containerd[1715]: time="2024-09-04T17:45:06.323826614Z" level=info msg="StopPodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\"" Sep 4 17:45:06.325044 containerd[1715]: time="2024-09-04T17:45:06.324620333Z" level=info msg="Ensure that sandbox a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099 in task-service has been cleanup successfully" Sep 4 17:45:06.326166 kubelet[3205]: I0904 17:45:06.325976 3205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:06.328863 containerd[1715]: time="2024-09-04T17:45:06.328671130Z" level=info msg="StopPodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\"" Sep 4 17:45:06.329311 kubelet[3205]: I0904 17:45:06.329291 3205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:06.330996 containerd[1715]: time="2024-09-04T17:45:06.330148265Z" level=info msg="Ensure that sandbox 0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e in task-service has been cleanup successfully" Sep 4 17:45:06.332444 containerd[1715]: time="2024-09-04T17:45:06.331987609Z" level=info msg="StopPodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\"" Sep 4 17:45:06.332444 containerd[1715]: time="2024-09-04T17:45:06.332158313Z" level=info msg="Ensure that sandbox 1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266 in task-service has been cleanup successfully" Sep 4 17:45:06.335444 kubelet[3205]: I0904 17:45:06.335284 3205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:06.336334 containerd[1715]: time="2024-09-04T17:45:06.335906802Z" level=info msg="StopPodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\"" Sep 4 17:45:06.336334 containerd[1715]: time="2024-09-04T17:45:06.336134508Z" level=info msg="Ensure that sandbox d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5 in task-service has been cleanup successfully" Sep 4 17:45:06.392148 containerd[1715]: time="2024-09-04T17:45:06.392084543Z" level=error msg="StopPodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" failed" error="failed to destroy network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:06.394425 kubelet[3205]: E0904 17:45:06.394160 3205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:06.394425 kubelet[3205]: 
E0904 17:45:06.394294 3205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099"} Sep 4 17:45:06.394425 kubelet[3205]: E0904 17:45:06.394360 3205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7fb02c2-be95-4a70-bfdb-d1ac140a6478\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:45:06.394425 kubelet[3205]: E0904 17:45:06.394400 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7fb02c2-be95-4a70-bfdb-d1ac140a6478\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z98pm" podUID="e7fb02c2-be95-4a70-bfdb-d1ac140a6478" Sep 4 17:45:06.402660 containerd[1715]: time="2024-09-04T17:45:06.402583193Z" level=error msg="StopPodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" failed" error="failed to destroy network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:06.403069 kubelet[3205]: E0904 17:45:06.403047 3205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:06.403226 kubelet[3205]: E0904 17:45:06.403214 3205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e"} Sep 4 17:45:06.403407 kubelet[3205]: E0904 17:45:06.403341 3205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3dec762-fd79-4f26-8e9e-de588a8c2d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:45:06.403407 kubelet[3205]: E0904 17:45:06.403388 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3dec762-fd79-4f26-8e9e-de588a8c2d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mngb6" podUID="a3dec762-fd79-4f26-8e9e-de588a8c2d0d" Sep 4 17:45:06.406838 containerd[1715]: time="2024-09-04T17:45:06.406799094Z" level=error msg="StopPodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" failed" error="failed to destroy network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:06.407035 kubelet[3205]: E0904 17:45:06.407006 3205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:06.407107 kubelet[3205]: E0904 17:45:06.407057 3205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5"} Sep 4 17:45:06.407159 kubelet[3205]: E0904 17:45:06.407106 3205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be0ad617-90e9-4ff4-80e2-c29502c9b417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:45:06.407159 kubelet[3205]: E0904 17:45:06.407142 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be0ad617-90e9-4ff4-80e2-c29502c9b417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cm46f" podUID="be0ad617-90e9-4ff4-80e2-c29502c9b417" Sep 4 17:45:06.408451 containerd[1715]: time="2024-09-04T17:45:06.408413733Z" level=error msg="StopPodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" failed" error="failed to destroy network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:45:06.408604 kubelet[3205]: E0904 17:45:06.408578 3205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:06.408667 kubelet[3205]: E0904 17:45:06.408609 3205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266"} Sep 4 17:45:06.408667 kubelet[3205]: E0904 17:45:06.408654 3205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee5bb778-32b6-45f7-a2da-cca7834e3b0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:45:06.408771 kubelet[3205]: E0904 17:45:06.408690 3205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee5bb778-32b6-45f7-a2da-cca7834e3b0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" podUID="ee5bb778-32b6-45f7-a2da-cca7834e3b0b" Sep 4 17:45:12.339739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447884124.mount: Deactivated successfully. 
Sep 4 17:45:12.390323 containerd[1715]: time="2024-09-04T17:45:12.390269433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:12.392416 containerd[1715]: time="2024-09-04T17:45:12.392352582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:45:12.395869 containerd[1715]: time="2024-09-04T17:45:12.395816065Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:12.399960 containerd[1715]: time="2024-09-04T17:45:12.399893962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:12.400926 containerd[1715]: time="2024-09-04T17:45:12.400460376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.076757628s" Sep 4 17:45:12.400926 containerd[1715]: time="2024-09-04T17:45:12.400500676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:45:12.417340 containerd[1715]: time="2024-09-04T17:45:12.415818442Z" level=info msg="CreateContainer within sandbox \"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:45:12.456239 containerd[1715]: time="2024-09-04T17:45:12.456190404Z" level=info msg="CreateContainer within sandbox \"47ba4ca3b5c3c7cd90b3635c47ed65bcdb4b4add28a3f16518962ad398ea3825\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bbfaebe09bb0b289d258e2aa0a6d5939a8c55190951e0ab8b6a30aba84c10060\"" Sep 4 17:45:12.456971 containerd[1715]: time="2024-09-04T17:45:12.456733917Z" level=info msg="StartContainer for \"bbfaebe09bb0b289d258e2aa0a6d5939a8c55190951e0ab8b6a30aba84c10060\"" Sep 4 17:45:12.486116 systemd[1]: Started cri-containerd-bbfaebe09bb0b289d258e2aa0a6d5939a8c55190951e0ab8b6a30aba84c10060.scope - libcontainer container bbfaebe09bb0b289d258e2aa0a6d5939a8c55190951e0ab8b6a30aba84c10060. Sep 4 17:45:12.516962 containerd[1715]: time="2024-09-04T17:45:12.516757048Z" level=info msg="StartContainer for \"bbfaebe09bb0b289d258e2aa0a6d5939a8c55190951e0ab8b6a30aba84c10060\" returns successfully" Sep 4 17:45:12.991536 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:45:12.991699 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 4 17:45:14.616966 kernel: bpftool[4375]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:45:14.983944 systemd-networkd[1497]: vxlan.calico: Link UP Sep 4 17:45:14.983958 systemd-networkd[1497]: vxlan.calico: Gained carrier Sep 4 17:45:16.863105 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL Sep 4 17:45:17.180896 containerd[1715]: time="2024-09-04T17:45:17.179316806Z" level=info msg="StopPodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\"" Sep 4 17:45:17.222163 kubelet[3205]: I0904 17:45:17.220048 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pg9qn" podStartSLOduration=6.9825128119999995 podStartE2EDuration="25.219992575s" podCreationTimestamp="2024-09-04 17:44:52 +0000 UTC" firstStartedPulling="2024-09-04 17:44:54.163375522 +0000 UTC m=+19.607541685" lastFinishedPulling="2024-09-04 17:45:12.400855185 +0000 UTC m=+37.845021448" observedRunningTime="2024-09-04 17:45:13.397361342 +0000 UTC m=+38.841527505" watchObservedRunningTime="2024-09-04 17:45:17.219992575 +0000 UTC m=+42.664158738" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.219 [INFO][4476] k8s.go 608: Cleaning up netns ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.220 [INFO][4476] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" iface="eth0" netns="/var/run/netns/cni-e82ba460-e38e-9ef4-c108-895531709824" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.220 [INFO][4476] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" iface="eth0" netns="/var/run/netns/cni-e82ba460-e38e-9ef4-c108-895531709824" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.221 [INFO][4476] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" iface="eth0" netns="/var/run/netns/cni-e82ba460-e38e-9ef4-c108-895531709824" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.221 [INFO][4476] k8s.go 615: Releasing IP address(es) ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.221 [INFO][4476] utils.go 188: Calico CNI releasing IP address ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.242 [INFO][4482] ipam_plugin.go 417: Releasing address using handleID ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.242 [INFO][4482] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.242 [INFO][4482] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.249 [WARNING][4482] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.249 [INFO][4482] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.250 [INFO][4482] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:17.253994 containerd[1715]: 2024-09-04 17:45:17.252 [INFO][4476] k8s.go 621: Teardown processing complete. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:17.255241 containerd[1715]: time="2024-09-04T17:45:17.255017110Z" level=info msg="TearDown network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" successfully" Sep 4 17:45:17.255241 containerd[1715]: time="2024-09-04T17:45:17.255054611Z" level=info msg="StopPodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" returns successfully" Sep 4 17:45:17.258249 containerd[1715]: time="2024-09-04T17:45:17.258221887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cm46f,Uid:be0ad617-90e9-4ff4-80e2-c29502c9b417,Namespace:calico-system,Attempt:1,}" Sep 4 17:45:17.259863 systemd[1]: run-netns-cni\x2de82ba460\x2de38e\x2d9ef4\x2dc108\x2d895531709824.mount: Deactivated successfully. Sep 4 17:45:17.394077 systemd-networkd[1497]: calicdc675e327b: Link UP Sep 4 17:45:17.394885 systemd-networkd[1497]: calicdc675e327b: Gained carrier Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.329 [INFO][4489] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0 csi-node-driver- calico-system be0ad617-90e9-4ff4-80e2-c29502c9b417 705 0 2024-09-04 17:44:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054.1.0-a-c31d97b133 csi-node-driver-cm46f eth0 default [] [] [kns.calico-system ksa.calico-system.default] calicdc675e327b [] []}} ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.329 [INFO][4489] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.355 [INFO][4501] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" HandleID="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" 
Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.363 [INFO][4501] ipam_plugin.go 270: Auto assigning IP ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" HandleID="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-c31d97b133", "pod":"csi-node-driver-cm46f", "timestamp":"2024-09-04 17:45:17.355167598 +0000 UTC"}, Hostname:"ci-4054.1.0-a-c31d97b133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.363 [INFO][4501] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.363 [INFO][4501] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.363 [INFO][4501] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-c31d97b133' Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.365 [INFO][4501] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.369 [INFO][4501] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.376 [INFO][4501] ipam.go 489: Trying affinity for 192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.377 [INFO][4501] ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.379 [INFO][4501] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.379 [INFO][4501] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.380 [INFO][4501] ipam.go 1685: Creating new handle: k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.383 [INFO][4501] ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.387 [INFO][4501] ipam.go 1216: Successfully claimed IPs: [192.168.91.65/26] block=192.168.91.64/26 handle="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.387 [INFO][4501] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.65/26] handle="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" 
host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.387 [INFO][4501] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:17.406000 containerd[1715]: 2024-09-04 17:45:17.387 [INFO][4501] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.91.65/26] IPv6=[] ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" HandleID="k8s-pod-network.26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.408221 containerd[1715]: 2024-09-04 17:45:17.389 [INFO][4489] k8s.go 386: Populated endpoint ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0ad617-90e9-4ff4-80e2-c29502c9b417", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"", Pod:"csi-node-driver-cm46f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicdc675e327b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:17.408221 containerd[1715]: 2024-09-04 17:45:17.389 [INFO][4489] k8s.go 387: Calico CNI using IPs: [192.168.91.65/32] ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.408221 containerd[1715]: 2024-09-04 17:45:17.389 [INFO][4489] dataplane_linux.go 68: Setting the host side veth name to calicdc675e327b ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.408221 containerd[1715]: 2024-09-04 17:45:17.391 [INFO][4489] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.408221 containerd[1715]: 2024-09-04 17:45:17.391 [INFO][4489] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0ad617-90e9-4ff4-80e2-c29502c9b417", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea", Pod:"csi-node-driver-cm46f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicdc675e327b", MAC:"ea:d4:10:6a:bd:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:17.408221 containerd[1715]: 2024-09-04 17:45:17.403 [INFO][4489] k8s.go 500: Wrote updated endpoint to datastore ContainerID="26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea" Namespace="calico-system" Pod="csi-node-driver-cm46f" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:17.442735 containerd[1715]: time="2024-09-04T17:45:17.442395277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:45:17.442735 containerd[1715]: time="2024-09-04T17:45:17.442472079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:45:17.442735 containerd[1715]: time="2024-09-04T17:45:17.442492680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:17.443037 containerd[1715]: time="2024-09-04T17:45:17.442679984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:17.474118 systemd[1]: Started cri-containerd-26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea.scope - libcontainer container 26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea. 
Sep 4 17:45:17.495439 containerd[1715]: time="2024-09-04T17:45:17.495396941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cm46f,Uid:be0ad617-90e9-4ff4-80e2-c29502c9b417,Namespace:calico-system,Attempt:1,} returns sandbox id \"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea\"" Sep 4 17:45:17.498732 containerd[1715]: time="2024-09-04T17:45:17.498698520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:45:18.177649 containerd[1715]: time="2024-09-04T17:45:18.177592505Z" level=info msg="StopPodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\"" Sep 4 17:45:18.179195 containerd[1715]: time="2024-09-04T17:45:18.179158842Z" level=info msg="StopPodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\"" Sep 4 17:45:18.259473 systemd[1]: run-containerd-runc-k8s.io-26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea-runc.vj2QDd.mount: Deactivated successfully. Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.236 [INFO][4589] k8s.go 608: Cleaning up netns ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.237 [INFO][4589] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" iface="eth0" netns="/var/run/netns/cni-d620f5cb-bba1-4407-0e45-a5d50907425a" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.238 [INFO][4589] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" iface="eth0" netns="/var/run/netns/cni-d620f5cb-bba1-4407-0e45-a5d50907425a" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.238 [INFO][4589] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" iface="eth0" netns="/var/run/netns/cni-d620f5cb-bba1-4407-0e45-a5d50907425a" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.238 [INFO][4589] k8s.go 615: Releasing IP address(es) ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.238 [INFO][4589] utils.go 188: Calico CNI releasing IP address ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.278 [INFO][4601] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.278 [INFO][4601] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.278 [INFO][4601] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.287 [WARNING][4601] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.287 [INFO][4601] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.290 [INFO][4601] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:18.294785 containerd[1715]: 2024-09-04 17:45:18.291 [INFO][4589] k8s.go 621: Teardown processing complete. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:18.294785 containerd[1715]: time="2024-09-04T17:45:18.293170560Z" level=info msg="TearDown network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" successfully" Sep 4 17:45:18.294785 containerd[1715]: time="2024-09-04T17:45:18.293199461Z" level=info msg="StopPodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" returns successfully" Sep 4 17:45:18.294785 containerd[1715]: time="2024-09-04T17:45:18.293825976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfdb78649-8k6wh,Uid:ee5bb778-32b6-45f7-a2da-cca7834e3b0b,Namespace:calico-system,Attempt:1,}" Sep 4 17:45:18.299276 systemd[1]: run-netns-cni\x2dd620f5cb\x2dbba1\x2d4407\x2d0e45\x2da5d50907425a.mount: Deactivated successfully. Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.243 [INFO][4588] k8s.go 608: Cleaning up netns ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.243 [INFO][4588] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" iface="eth0" netns="/var/run/netns/cni-f3a85ae3-184c-1909-8866-5997d9f79571" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.245 [INFO][4588] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" iface="eth0" netns="/var/run/netns/cni-f3a85ae3-184c-1909-8866-5997d9f79571" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.246 [INFO][4588] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" iface="eth0" netns="/var/run/netns/cni-f3a85ae3-184c-1909-8866-5997d9f79571" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.246 [INFO][4588] k8s.go 615: Releasing IP address(es) ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.246 [INFO][4588] utils.go 188: Calico CNI releasing IP address ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.285 [INFO][4605] ipam_plugin.go 417: Releasing address using handleID ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.286 [INFO][4605] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.290 [INFO][4605] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.305 [WARNING][4605] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.305 [INFO][4605] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.306 [INFO][4605] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:18.309319 containerd[1715]: 2024-09-04 17:45:18.307 [INFO][4588] k8s.go 621: Teardown processing complete. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:18.311262 containerd[1715]: time="2024-09-04T17:45:18.309479250Z" level=info msg="TearDown network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" successfully" Sep 4 17:45:18.311262 containerd[1715]: time="2024-09-04T17:45:18.309512151Z" level=info msg="StopPodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" returns successfully" Sep 4 17:45:18.312926 systemd[1]: run-netns-cni\x2df3a85ae3\x2d184c\x2d1909\x2d8866\x2d5997d9f79571.mount: Deactivated successfully. 
Sep 4 17:45:18.314245 containerd[1715]: time="2024-09-04T17:45:18.314207963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z98pm,Uid:e7fb02c2-be95-4a70-bfdb-d1ac140a6478,Namespace:kube-system,Attempt:1,}" Sep 4 17:45:18.509248 systemd-networkd[1497]: cali26f758b48c3: Link UP Sep 4 17:45:18.509518 systemd-networkd[1497]: cali26f758b48c3: Gained carrier Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.424 [INFO][4614] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0 calico-kube-controllers-7dfdb78649- calico-system ee5bb778-32b6-45f7-a2da-cca7834e3b0b 714 0 2024-09-04 17:44:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dfdb78649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054.1.0-a-c31d97b133 calico-kube-controllers-7dfdb78649-8k6wh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali26f758b48c3 [] []}} ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.424 [INFO][4614] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.467 [INFO][4639] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" HandleID="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.481 [INFO][4639] ipam_plugin.go 270: Auto assigning IP ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" HandleID="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-c31d97b133", "pod":"calico-kube-controllers-7dfdb78649-8k6wh", "timestamp":"2024-09-04 17:45:18.46784814 +0000 UTC"}, Hostname:"ci-4054.1.0-a-c31d97b133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.481 [INFO][4639] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.481 [INFO][4639] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.482 [INFO][4639] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-c31d97b133' Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.483 [INFO][4639] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.486 [INFO][4639] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.490 [INFO][4639] ipam.go 489: Trying affinity for 192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.492 [INFO][4639] ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.493 [INFO][4639] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.493 [INFO][4639] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.495 [INFO][4639] ipam.go 1685: Creating new handle: k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534 Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.497 [INFO][4639] ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.502 [INFO][4639] ipam.go 1216: Successfully claimed IPs: [192.168.91.66/26] block=192.168.91.64/26 handle="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.503 [INFO][4639] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.66/26] handle="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.503 [INFO][4639] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:45:18.529524 containerd[1715]: 2024-09-04 17:45:18.503 [INFO][4639] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.91.66/26] IPv6=[] ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" HandleID="k8s-pod-network.07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.530689 containerd[1715]: 2024-09-04 17:45:18.505 [INFO][4614] k8s.go 386: Populated endpoint ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0", GenerateName:"calico-kube-controllers-7dfdb78649-", Namespace:"calico-system", SelfLink:"", UID:"ee5bb778-32b6-45f7-a2da-cca7834e3b0b", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfdb78649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"", Pod:"calico-kube-controllers-7dfdb78649-8k6wh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali26f758b48c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:18.530689 containerd[1715]: 2024-09-04 17:45:18.505 [INFO][4614] k8s.go 387: Calico CNI using IPs: [192.168.91.66/32] ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.530689 containerd[1715]: 2024-09-04 17:45:18.505 [INFO][4614] dataplane_linux.go 68: Setting the host side veth name to cali26f758b48c3 ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.530689 containerd[1715]: 2024-09-04 17:45:18.509 [INFO][4614] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.530689 containerd[1715]: 2024-09-04 17:45:18.509 [INFO][4614] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0", GenerateName:"calico-kube-controllers-7dfdb78649-", Namespace:"calico-system", SelfLink:"", UID:"ee5bb778-32b6-45f7-a2da-cca7834e3b0b", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfdb78649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534", Pod:"calico-kube-controllers-7dfdb78649-8k6wh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali26f758b48c3", MAC:"0a:58:a3:af:82:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:18.530689 containerd[1715]: 2024-09-04 17:45:18.526 [INFO][4614] k8s.go 500: Wrote updated endpoint to datastore ContainerID="07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534" Namespace="calico-system" Pod="calico-kube-controllers-7dfdb78649-8k6wh" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:18.568654 systemd-networkd[1497]: cali75c897fa6d3: Link UP Sep 4 17:45:18.570067 systemd-networkd[1497]: cali75c897fa6d3: Gained carrier Sep 4 17:45:18.576290 containerd[1715]: time="2024-09-04T17:45:18.575669721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:45:18.576290 containerd[1715]: time="2024-09-04T17:45:18.575726423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:45:18.576290 containerd[1715]: time="2024-09-04T17:45:18.575760823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:18.576290 containerd[1715]: time="2024-09-04T17:45:18.575891127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.427 [INFO][4618] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0 coredns-76f75df574- kube-system e7fb02c2-be95-4a70-bfdb-d1ac140a6478 715 0 2024-09-04 17:44:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-c31d97b133 coredns-76f75df574-z98pm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75c897fa6d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.427 [INFO][4618] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.470 [INFO][4640] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" HandleID="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.482 [INFO][4640] ipam_plugin.go 270: Auto assigning IP ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" HandleID="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378620), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-c31d97b133", "pod":"coredns-76f75df574-z98pm", "timestamp":"2024-09-04 17:45:18.470846312 +0000 UTC"}, Hostname:"ci-4054.1.0-a-c31d97b133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.482 [INFO][4640] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.503 [INFO][4640] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.503 [INFO][4640] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-c31d97b133' Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.505 [INFO][4640] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.520 [INFO][4640] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.530 [INFO][4640] ipam.go 489: Trying affinity for 192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.532 [INFO][4640] ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.538 [INFO][4640] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.538 [INFO][4640] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.541 [INFO][4640] ipam.go 1685: Creating new handle: k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294 Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.547 [INFO][4640] ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.557 [INFO][4640] ipam.go 1216: Successfully claimed IPs: [192.168.91.67/26] block=192.168.91.64/26 handle="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.558 [INFO][4640] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.67/26] handle="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.559 [INFO][4640] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:45:18.598737 containerd[1715]: 2024-09-04 17:45:18.559 [INFO][4640] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.91.67/26] IPv6=[] ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" HandleID="k8s-pod-network.0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.599716 containerd[1715]: 2024-09-04 17:45:18.561 [INFO][4618] k8s.go 386: Populated endpoint ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7fb02c2-be95-4a70-bfdb-d1ac140a6478", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"", Pod:"coredns-76f75df574-z98pm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c897fa6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:18.599716 containerd[1715]: 2024-09-04 17:45:18.562 [INFO][4618] k8s.go 387: Calico CNI using IPs: [192.168.91.67/32] ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.599716 containerd[1715]: 2024-09-04 17:45:18.562 [INFO][4618] dataplane_linux.go 68: Setting the host side veth name to cali75c897fa6d3 ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.599716 containerd[1715]: 2024-09-04 17:45:18.566 [INFO][4618] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" 
WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.599716 containerd[1715]: 2024-09-04 17:45:18.567 [INFO][4618] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7fb02c2-be95-4a70-bfdb-d1ac140a6478", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294", Pod:"coredns-76f75df574-z98pm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c897fa6d3", MAC:"72:ab:87:58:04:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:18.599716 containerd[1715]: 2024-09-04 17:45:18.591 [INFO][4618] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294" Namespace="kube-system" Pod="coredns-76f75df574-z98pm" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:18.623135 systemd[1]: Started cri-containerd-07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534.scope - libcontainer container 07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534. Sep 4 17:45:18.681338 containerd[1715]: time="2024-09-04T17:45:18.681255248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:45:18.681579 containerd[1715]: time="2024-09-04T17:45:18.681546255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:45:18.681842 containerd[1715]: time="2024-09-04T17:45:18.681756760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:18.682473 containerd[1715]: time="2024-09-04T17:45:18.682364575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:18.709139 systemd[1]: Started cri-containerd-0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294.scope - libcontainer container 0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294. Sep 4 17:45:18.733423 containerd[1715]: time="2024-09-04T17:45:18.733032488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfdb78649-8k6wh,Uid:ee5bb778-32b6-45f7-a2da-cca7834e3b0b,Namespace:calico-system,Attempt:1,} returns sandbox id \"07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534\"" Sep 4 17:45:18.772471 containerd[1715]: time="2024-09-04T17:45:18.772043522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z98pm,Uid:e7fb02c2-be95-4a70-bfdb-d1ac140a6478,Namespace:kube-system,Attempt:1,} returns sandbox id \"0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294\"" Sep 4 17:45:18.783331 containerd[1715]: time="2024-09-04T17:45:18.783207789Z" level=info msg="CreateContainer within sandbox \"0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:45:18.830460 containerd[1715]: time="2024-09-04T17:45:18.830414019Z" level=info msg="CreateContainer within sandbox \"0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b568f9ab7668b7476990631d0426e31b5dc5046262bfe1c9cd32691e6d0a16ef\"" Sep 4 17:45:18.832795 containerd[1715]: time="2024-09-04T17:45:18.832626072Z" level=info msg="StartContainer for \"b568f9ab7668b7476990631d0426e31b5dc5046262bfe1c9cd32691e6d0a16ef\"" Sep 4 17:45:18.875370 systemd[1]: Started cri-containerd-b568f9ab7668b7476990631d0426e31b5dc5046262bfe1c9cd32691e6d0a16ef.scope - libcontainer container b568f9ab7668b7476990631d0426e31b5dc5046262bfe1c9cd32691e6d0a16ef. 
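The Calico WorkloadEndpoint dumps above print the coredns port list in Go struct syntax with hexadecimal values (Port:0x35, Port:0x23c1). A quick, purely illustrative check that these match the named ports (dns, dns-tcp, metrics):

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpointPort dumps above.
	ports := map[string]uint16{
		"dns":     0x35,   // UDP
		"dns-tcp": 0x35,   // TCP
		"metrics": 0x23c1, // TCP
	}
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p) // dns -> 53, metrics -> 9153
	}
}
```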
Sep 4 17:45:18.926847 containerd[1715]: time="2024-09-04T17:45:18.926792225Z" level=info msg="StartContainer for \"b568f9ab7668b7476990631d0426e31b5dc5046262bfe1c9cd32691e6d0a16ef\" returns successfully" Sep 4 17:45:18.971900 containerd[1715]: time="2024-09-04T17:45:18.971839904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:18.974092 containerd[1715]: time="2024-09-04T17:45:18.974042456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:45:18.978466 containerd[1715]: time="2024-09-04T17:45:18.978407061Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:18.983120 containerd[1715]: time="2024-09-04T17:45:18.983083173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:18.984256 containerd[1715]: time="2024-09-04T17:45:18.983692087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.484781563s" Sep 4 17:45:18.984256 containerd[1715]: time="2024-09-04T17:45:18.983731188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:45:18.985021 containerd[1715]: time="2024-09-04T17:45:18.984988418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:45:18.985944 containerd[1715]: time="2024-09-04T17:45:18.985901840Z" level=info msg="CreateContainer within sandbox \"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:45:19.025040 containerd[1715]: time="2024-09-04T17:45:19.024862573Z" level=info msg="CreateContainer within sandbox \"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5f7040c6e3229a2ccb81627c0d1a4e23db5ea1b8278e734eb82b7bbb0687faf0\"" Sep 4 17:45:19.025984 containerd[1715]: time="2024-09-04T17:45:19.025929298Z" level=info msg="StartContainer for \"5f7040c6e3229a2ccb81627c0d1a4e23db5ea1b8278e734eb82b7bbb0687faf0\"" Sep 4 17:45:19.053507 systemd[1]: Started cri-containerd-5f7040c6e3229a2ccb81627c0d1a4e23db5ea1b8278e734eb82b7bbb0687faf0.scope - libcontainer container 5f7040c6e3229a2ccb81627c0d1a4e23db5ea1b8278e734eb82b7bbb0687faf0. 
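The entries above record a pull of ghcr.io/flatcar/calico/csi:v3.28.1 (about 7.6 MB read in roughly 1.48 s) before the calico-csi container is created. A minimal sketch of reproducing such a pull with the containerd 1.x Go client; the socket path and the "k8s.io" namespace are the conventional values on a kubelet-managed node and are assumptions here, not taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the CRI plugin talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image referenced in the log above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.28.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes of packed content)\n", img.Name(), size)
}
```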
Sep 4 17:45:19.082050 containerd[1715]: time="2024-09-04T17:45:19.081916138Z" level=info msg="StartContainer for \"5f7040c6e3229a2ccb81627c0d1a4e23db5ea1b8278e734eb82b7bbb0687faf0\" returns successfully" Sep 4 17:45:19.411094 kubelet[3205]: I0904 17:45:19.410875 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-z98pm" podStartSLOduration=33.410829411 podStartE2EDuration="33.410829411s" podCreationTimestamp="2024-09-04 17:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:45:19.395247938 +0000 UTC m=+44.839414201" watchObservedRunningTime="2024-09-04 17:45:19.410829411 +0000 UTC m=+44.854995574" Sep 4 17:45:19.423199 systemd-networkd[1497]: calicdc675e327b: Gained IPv6LL Sep 4 17:45:19.807104 systemd-networkd[1497]: cali75c897fa6d3: Gained IPv6LL Sep 4 17:45:20.127092 systemd-networkd[1497]: cali26f758b48c3: Gained IPv6LL Sep 4 17:45:20.179286 containerd[1715]: time="2024-09-04T17:45:20.177877571Z" level=info msg="StopPodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\"" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.219 [INFO][4853] k8s.go 608: Cleaning up netns ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.219 [INFO][4853] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" iface="eth0" netns="/var/run/netns/cni-21c35366-688e-2ea6-6b49-2cd2b96d3ca2" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.219 [INFO][4853] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" iface="eth0" netns="/var/run/netns/cni-21c35366-688e-2ea6-6b49-2cd2b96d3ca2" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.220 [INFO][4853] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" iface="eth0" netns="/var/run/netns/cni-21c35366-688e-2ea6-6b49-2cd2b96d3ca2" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.220 [INFO][4853] k8s.go 615: Releasing IP address(es) ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.220 [INFO][4853] utils.go 188: Calico CNI releasing IP address ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.241 [INFO][4860] ipam_plugin.go 417: Releasing address using handleID ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.241 [INFO][4860] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.241 [INFO][4860] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.246 [WARNING][4860] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.246 [INFO][4860] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.247 [INFO][4860] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:20.249821 containerd[1715]: 2024-09-04 17:45:20.248 [INFO][4853] k8s.go 621: Teardown processing complete. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:20.252101 containerd[1715]: time="2024-09-04T17:45:20.252060746Z" level=info msg="TearDown network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" successfully" Sep 4 17:45:20.252101 containerd[1715]: time="2024-09-04T17:45:20.252100047Z" level=info msg="StopPodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" returns successfully" Sep 4 17:45:20.252837 containerd[1715]: time="2024-09-04T17:45:20.252804664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mngb6,Uid:a3dec762-fd79-4f26-8e9e-de588a8c2d0d,Namespace:kube-system,Attempt:1,}" Sep 4 17:45:20.254672 systemd[1]: run-netns-cni\x2d21c35366\x2d688e\x2d2ea6\x2d6b49\x2d2cd2b96d3ca2.mount: Deactivated successfully. Sep 4 17:45:20.407061 systemd-networkd[1497]: cali6b5f06870bf: Link UP Sep 4 17:45:20.409858 systemd-networkd[1497]: cali6b5f06870bf: Gained carrier Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.346 [INFO][4874] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0 coredns-76f75df574- kube-system a3dec762-fd79-4f26-8e9e-de588a8c2d0d 742 0 2024-09-04 17:44:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-c31d97b133 coredns-76f75df574-mngb6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b5f06870bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.347 [INFO][4874] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.371 [INFO][4885] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" HandleID="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 
17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.379 [INFO][4885] ipam_plugin.go 270: Auto assigning IP ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" HandleID="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee570), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-c31d97b133", "pod":"coredns-76f75df574-mngb6", "timestamp":"2024-09-04 17:45:20.371237699 +0000 UTC"}, Hostname:"ci-4054.1.0-a-c31d97b133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.379 [INFO][4885] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.379 [INFO][4885] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.379 [INFO][4885] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-c31d97b133' Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.380 [INFO][4885] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.383 [INFO][4885] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.387 [INFO][4885] ipam.go 489: Trying affinity for 192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.388 [INFO][4885] ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.391 [INFO][4885] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.391 [INFO][4885] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.392 [INFO][4885] ipam.go 1685: Creating new handle: k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880 Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.395 [INFO][4885] ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.401 [INFO][4885] ipam.go 1216: Successfully claimed IPs: [192.168.91.68/26] block=192.168.91.64/26 handle="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.401 [INFO][4885] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.68/26] handle="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 
17:45:20.401 [INFO][4885] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:20.427495 containerd[1715]: 2024-09-04 17:45:20.401 [INFO][4885] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.91.68/26] IPv6=[] ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" HandleID="k8s-pod-network.93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.429844 containerd[1715]: 2024-09-04 17:45:20.402 [INFO][4874] k8s.go 386: Populated endpoint ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a3dec762-fd79-4f26-8e9e-de588a8c2d0d", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"", Pod:"coredns-76f75df574-mngb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5f06870bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:20.429844 containerd[1715]: 2024-09-04 17:45:20.403 [INFO][4874] k8s.go 387: Calico CNI using IPs: [192.168.91.68/32] ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.429844 containerd[1715]: 2024-09-04 17:45:20.403 [INFO][4874] dataplane_linux.go 68: Setting the host side veth name to cali6b5f06870bf ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.429844 containerd[1715]: 2024-09-04 17:45:20.405 [INFO][4874] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" 
WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.429844 containerd[1715]: 2024-09-04 17:45:20.405 [INFO][4874] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a3dec762-fd79-4f26-8e9e-de588a8c2d0d", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880", Pod:"coredns-76f75df574-mngb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5f06870bf", MAC:"a6:a4:e4:6e:4c:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:20.429844 containerd[1715]: 2024-09-04 17:45:20.425 [INFO][4874] k8s.go 500: Wrote updated endpoint to datastore ContainerID="93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880" Namespace="kube-system" Pod="coredns-76f75df574-mngb6" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:20.457634 containerd[1715]: time="2024-09-04T17:45:20.457104754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:45:20.457634 containerd[1715]: time="2024-09-04T17:45:20.457356560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:45:20.457634 containerd[1715]: time="2024-09-04T17:45:20.457388161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:20.457634 containerd[1715]: time="2024-09-04T17:45:20.457488563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:20.486079 systemd[1]: Started cri-containerd-93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880.scope - libcontainer container 93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880. Sep 4 17:45:20.530163 containerd[1715]: time="2024-09-04T17:45:20.530118502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mngb6,Uid:a3dec762-fd79-4f26-8e9e-de588a8c2d0d,Namespace:kube-system,Attempt:1,} returns sandbox id \"93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880\"" Sep 4 17:45:20.540682 containerd[1715]: time="2024-09-04T17:45:20.540644254Z" level=info msg="CreateContainer within sandbox \"93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:45:20.586745 containerd[1715]: time="2024-09-04T17:45:20.586705056Z" level=info msg="CreateContainer within sandbox \"93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49dedb7f3c37a454027653e3131109ab02fd13c0bcc655875a2adcbcdcaff138\"" Sep 4 17:45:20.587334 containerd[1715]: time="2024-09-04T17:45:20.587285770Z" level=info msg="StartContainer for \"49dedb7f3c37a454027653e3131109ab02fd13c0bcc655875a2adcbcdcaff138\"" Sep 4 17:45:20.617127 systemd[1]: Started cri-containerd-49dedb7f3c37a454027653e3131109ab02fd13c0bcc655875a2adcbcdcaff138.scope - libcontainer container 49dedb7f3c37a454027653e3131109ab02fd13c0bcc655875a2adcbcdcaff138. Sep 4 17:45:20.651188 containerd[1715]: time="2024-09-04T17:45:20.651146599Z" level=info msg="StartContainer for \"49dedb7f3c37a454027653e3131109ab02fd13c0bcc655875a2adcbcdcaff138\" returns successfully" Sep 4 17:45:21.408656 kubelet[3205]: I0904 17:45:21.408085 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mngb6" podStartSLOduration=35.408018515 podStartE2EDuration="35.408018515s" podCreationTimestamp="2024-09-04 17:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:45:21.407377899 +0000 UTC m=+46.851544062" watchObservedRunningTime="2024-09-04 17:45:21.408018515 +0000 UTC m=+46.852184778" Sep 4 17:45:21.727393 systemd-networkd[1497]: cali6b5f06870bf: Gained IPv6LL Sep 4 17:45:22.009269 containerd[1715]: time="2024-09-04T17:45:22.009137603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:22.011234 containerd[1715]: time="2024-09-04T17:45:22.011170151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:45:22.015697 containerd[1715]: time="2024-09-04T17:45:22.015636758Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:22.025206 containerd[1715]: time="2024-09-04T17:45:22.025150586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:22.026574 containerd[1715]: time="2024-09-04T17:45:22.025844102Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.040814683s" Sep 4 17:45:22.026574 containerd[1715]: time="2024-09-04T17:45:22.025883603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:45:22.027652 containerd[1715]: time="2024-09-04T17:45:22.027626045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:45:22.048830 containerd[1715]: time="2024-09-04T17:45:22.048591947Z" level=info msg="CreateContainer within sandbox \"07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:45:22.090596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900874856.mount: Deactivated successfully. Sep 4 17:45:22.100533 containerd[1715]: time="2024-09-04T17:45:22.100490489Z" level=info msg="CreateContainer within sandbox \"07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9fca42869381a858ede7c4270853d4b108806991356985982cf5dc27925e2c16\"" Sep 4 17:45:22.101334 containerd[1715]: time="2024-09-04T17:45:22.101056903Z" level=info msg="StartContainer for \"9fca42869381a858ede7c4270853d4b108806991356985982cf5dc27925e2c16\"" Sep 4 17:45:22.130129 systemd[1]: Started cri-containerd-9fca42869381a858ede7c4270853d4b108806991356985982cf5dc27925e2c16.scope - libcontainer container 9fca42869381a858ede7c4270853d4b108806991356985982cf5dc27925e2c16. 
Sep 4 17:45:22.177481 containerd[1715]: time="2024-09-04T17:45:22.177319928Z" level=info msg="StartContainer for \"9fca42869381a858ede7c4270853d4b108806991356985982cf5dc27925e2c16\" returns successfully" Sep 4 17:45:22.414759 kubelet[3205]: I0904 17:45:22.414631 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dfdb78649-8k6wh" podStartSLOduration=27.122568012 podStartE2EDuration="30.414580307s" podCreationTimestamp="2024-09-04 17:44:52 +0000 UTC" firstStartedPulling="2024-09-04 17:45:18.734956934 +0000 UTC m=+44.179123097" lastFinishedPulling="2024-09-04 17:45:22.026969229 +0000 UTC m=+47.471135392" observedRunningTime="2024-09-04 17:45:22.412749763 +0000 UTC m=+47.856915926" watchObservedRunningTime="2024-09-04 17:45:22.414580307 +0000 UTC m=+47.858746470" Sep 4 17:45:23.743309 containerd[1715]: time="2024-09-04T17:45:23.743245809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:23.747782 containerd[1715]: time="2024-09-04T17:45:23.747721016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:45:23.752400 containerd[1715]: time="2024-09-04T17:45:23.752366227Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:23.758523 containerd[1715]: time="2024-09-04T17:45:23.758447873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:23.761297 containerd[1715]: time="2024-09-04T17:45:23.760999734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.733335788s" Sep 4 17:45:23.761297 containerd[1715]: time="2024-09-04T17:45:23.761051235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:45:23.766723 containerd[1715]: time="2024-09-04T17:45:23.766686770Z" level=info msg="CreateContainer within sandbox \"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:45:23.812596 containerd[1715]: time="2024-09-04T17:45:23.812550968Z" level=info msg="CreateContainer within sandbox \"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ed68b4bc85fccc30483fd03498b4c43bcb44730bdacde6cf52696d26d2f2cec5\"" Sep 4 17:45:23.813309 containerd[1715]: time="2024-09-04T17:45:23.813162583Z" level=info msg="StartContainer for \"ed68b4bc85fccc30483fd03498b4c43bcb44730bdacde6cf52696d26d2f2cec5\"" Sep 4 17:45:23.857111 systemd[1]: Started cri-containerd-ed68b4bc85fccc30483fd03498b4c43bcb44730bdacde6cf52696d26d2f2cec5.scope - libcontainer container 
ed68b4bc85fccc30483fd03498b4c43bcb44730bdacde6cf52696d26d2f2cec5. Sep 4 17:45:23.892740 containerd[1715]: time="2024-09-04T17:45:23.892678586Z" level=info msg="StartContainer for \"ed68b4bc85fccc30483fd03498b4c43bcb44730bdacde6cf52696d26d2f2cec5\" returns successfully" Sep 4 17:45:24.309861 kubelet[3205]: I0904 17:45:24.309648 3205 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:45:24.309861 kubelet[3205]: I0904 17:45:24.309704 3205 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:45:27.163748 kubelet[3205]: I0904 17:45:27.162749 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-cm46f" podStartSLOduration=28.898505912 podStartE2EDuration="35.162700672s" podCreationTimestamp="2024-09-04 17:44:52 +0000 UTC" firstStartedPulling="2024-09-04 17:45:17.497920301 +0000 UTC m=+42.942086564" lastFinishedPulling="2024-09-04 17:45:23.762115061 +0000 UTC m=+49.206281324" observedRunningTime="2024-09-04 17:45:24.425390036 +0000 UTC m=+49.869556199" watchObservedRunningTime="2024-09-04 17:45:27.162700672 +0000 UTC m=+52.606866935" Sep 4 17:45:32.352149 kubelet[3205]: I0904 17:45:32.351371 3205 topology_manager.go:215] "Topology Admit Handler" podUID="56ecccd1-17d9-4ffe-8d93-a294d4f76e4f" podNamespace="calico-apiserver" podName="calico-apiserver-78f648d64b-jtm9k" Sep 4 17:45:32.360782 systemd[1]: Created slice kubepods-besteffort-pod56ecccd1_17d9_4ffe_8d93_a294d4f76e4f.slice - libcontainer container kubepods-besteffort-pod56ecccd1_17d9_4ffe_8d93_a294d4f76e4f.slice. Sep 4 17:45:32.364535 kubelet[3205]: I0904 17:45:32.364340 3205 topology_manager.go:215] "Topology Admit Handler" podUID="e30609b2-4c00-4afd-91eb-54ca25d0743d" podNamespace="calico-apiserver" podName="calico-apiserver-78f648d64b-ps4g4" Sep 4 17:45:32.374782 systemd[1]: Created slice kubepods-besteffort-pode30609b2_4c00_4afd_91eb_54ca25d0743d.slice - libcontainer container kubepods-besteffort-pode30609b2_4c00_4afd_91eb_54ca25d0743d.slice. 
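The systemd entries above show one cgroup slice being created per admitted BestEffort pod, named from the pod's QoS class and UID with the UID's dashes turned into underscores. A sketch of that naming (simplified model of the observed names; the kubelet's real code also escapes the unit name and nests it under kubepods.slice):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName mimics how the kubelet names a BestEffort pod's cgroup slice:
// "kubepods-besteffort-pod" + UID with '-' replaced by '_' + ".slice".
func sliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the calico-apiserver-78f648d64b-jtm9k entry above.
	fmt.Println(sliceName("56ecccd1-17d9-4ffe-8d93-a294d4f76e4f"))
	// kubepods-besteffort-pod56ecccd1_17d9_4ffe_8d93_a294d4f76e4f.slice
}
```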
Sep 4 17:45:32.482912 kubelet[3205]: I0904 17:45:32.482874 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e30609b2-4c00-4afd-91eb-54ca25d0743d-calico-apiserver-certs\") pod \"calico-apiserver-78f648d64b-ps4g4\" (UID: \"e30609b2-4c00-4afd-91eb-54ca25d0743d\") " pod="calico-apiserver/calico-apiserver-78f648d64b-ps4g4" Sep 4 17:45:32.482912 kubelet[3205]: I0904 17:45:32.482951 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56ecccd1-17d9-4ffe-8d93-a294d4f76e4f-calico-apiserver-certs\") pod \"calico-apiserver-78f648d64b-jtm9k\" (UID: \"56ecccd1-17d9-4ffe-8d93-a294d4f76e4f\") " pod="calico-apiserver/calico-apiserver-78f648d64b-jtm9k" Sep 4 17:45:32.483194 kubelet[3205]: I0904 17:45:32.483045 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgt82\" (UniqueName: \"kubernetes.io/projected/e30609b2-4c00-4afd-91eb-54ca25d0743d-kube-api-access-xgt82\") pod \"calico-apiserver-78f648d64b-ps4g4\" (UID: \"e30609b2-4c00-4afd-91eb-54ca25d0743d\") " pod="calico-apiserver/calico-apiserver-78f648d64b-ps4g4" Sep 4 17:45:32.483194 kubelet[3205]: I0904 17:45:32.483087 3205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrn2\" (UniqueName: \"kubernetes.io/projected/56ecccd1-17d9-4ffe-8d93-a294d4f76e4f-kube-api-access-gtrn2\") pod \"calico-apiserver-78f648d64b-jtm9k\" (UID: \"56ecccd1-17d9-4ffe-8d93-a294d4f76e4f\") " pod="calico-apiserver/calico-apiserver-78f648d64b-jtm9k" Sep 4 17:45:32.666965 containerd[1715]: time="2024-09-04T17:45:32.666813641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f648d64b-jtm9k,Uid:56ecccd1-17d9-4ffe-8d93-a294d4f76e4f,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:45:32.679155 containerd[1715]: time="2024-09-04T17:45:32.679117630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f648d64b-ps4g4,Uid:e30609b2-4c00-4afd-91eb-54ca25d0743d,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:45:32.856175 systemd-networkd[1497]: calie46c28d5dfe: Link UP Sep 4 17:45:32.857525 systemd-networkd[1497]: calie46c28d5dfe: Gained carrier Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.777 [INFO][5137] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0 calico-apiserver-78f648d64b- calico-apiserver 56ecccd1-17d9-4ffe-8d93-a294d4f76e4f 849 0 2024-09-04 17:45:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78f648d64b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-c31d97b133 calico-apiserver-78f648d64b-jtm9k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie46c28d5dfe [] []}} ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.777 [INFO][5137] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.819 [INFO][5160] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" HandleID="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.830 [INFO][5160] ipam_plugin.go 270: Auto assigning IP ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" HandleID="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-c31d97b133", "pod":"calico-apiserver-78f648d64b-jtm9k", "timestamp":"2024-09-04 17:45:32.819764133 +0000 UTC"}, Hostname:"ci-4054.1.0-a-c31d97b133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.830 [INFO][5160] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.830 [INFO][5160] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.830 [INFO][5160] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-c31d97b133' Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.831 [INFO][5160] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.835 [INFO][5160] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.839 [INFO][5160] ipam.go 489: Trying affinity for 192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.840 [INFO][5160] ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.842 [INFO][5160] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.842 [INFO][5160] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.843 [INFO][5160] ipam.go 1685: Creating new handle: k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.846 [INFO][5160] ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.849 [INFO][5160] ipam.go 1216: Successfully claimed IPs: [192.168.91.69/26] block=192.168.91.64/26 handle="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.849 [INFO][5160] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.69/26] handle="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.850 [INFO][5160] ipam_plugin.go 379: Released host-wide IPAM lock. 
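The ipam.go entries above trace Calico's address assignment for the first apiserver pod: take the host-wide IPAM lock, confirm this host's affinity to block 192.168.91.64/26, claim the next free address (here 192.168.91.69), write the block, and release the lock; the same sequence repeats below and yields .70. The following is a simplified, self-contained model of the "claim next free IP in an affine block" step, not Calico's implementation; the .66 allocation is assumed (its log entry falls outside this excerpt), while .65, .67 and .68 are visible in earlier entries:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a /26 IPAM block affine to one host.
type block struct {
	mu     sync.Mutex // stands in for the host-wide IPAM lock in the log
	prefix netip.Prefix
	used   map[netip.Addr]bool
}

// claimNext returns the lowest unused address in the block, skipping the
// network address, and records it as allocated.
func (b *block) claimNext() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		prefix: netip.MustParsePrefix("192.168.91.64/26"),
		used:   map[netip.Addr]bool{},
	}
	// .65, .67, .68 appear in earlier entries; .66 is assumed allocated.
	for _, s := range []string{"192.168.91.65", "192.168.91.66", "192.168.91.67", "192.168.91.68"} {
		b.used[netip.MustParseAddr(s)] = true
	}
	ip, _ := b.claimNext()
	fmt.Println(ip) // 192.168.91.69, matching the assignment above
	ip, _ = b.claimNext()
	fmt.Println(ip) // 192.168.91.70, matching the next assignment below
}
```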
Sep 4 17:45:32.880771 containerd[1715]: 2024-09-04 17:45:32.850 [INFO][5160] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.91.69/26] IPv6=[] ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" HandleID="k8s-pod-network.7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.882278 containerd[1715]: 2024-09-04 17:45:32.851 [INFO][5137] k8s.go 386: Populated endpoint ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0", GenerateName:"calico-apiserver-78f648d64b-", Namespace:"calico-apiserver", SelfLink:"", UID:"56ecccd1-17d9-4ffe-8d93-a294d4f76e4f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f648d64b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"", Pod:"calico-apiserver-78f648d64b-jtm9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie46c28d5dfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:32.882278 containerd[1715]: 2024-09-04 17:45:32.852 [INFO][5137] k8s.go 387: Calico CNI using IPs: [192.168.91.69/32] ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.882278 containerd[1715]: 2024-09-04 17:45:32.852 [INFO][5137] dataplane_linux.go 68: Setting the host side veth name to calie46c28d5dfe ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.882278 containerd[1715]: 2024-09-04 17:45:32.859 [INFO][5137] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.882278 containerd[1715]: 2024-09-04 17:45:32.859 [INFO][5137] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0", GenerateName:"calico-apiserver-78f648d64b-", Namespace:"calico-apiserver", SelfLink:"", UID:"56ecccd1-17d9-4ffe-8d93-a294d4f76e4f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f648d64b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a", Pod:"calico-apiserver-78f648d64b-jtm9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie46c28d5dfe", MAC:"1e:56:7a:ed:e6:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:32.882278 containerd[1715]: 2024-09-04 17:45:32.874 [INFO][5137] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-jtm9k" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--jtm9k-eth0" Sep 4 17:45:32.929463 containerd[1715]: time="2024-09-04T17:45:32.927552264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:45:32.929463 containerd[1715]: time="2024-09-04T17:45:32.927989975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:45:32.929463 containerd[1715]: time="2024-09-04T17:45:32.928027875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:32.929463 containerd[1715]: time="2024-09-04T17:45:32.928117878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:32.962492 systemd-networkd[1497]: cali554dd5fc3ea: Link UP Sep 4 17:45:32.962752 systemd-networkd[1497]: cali554dd5fc3ea: Gained carrier Sep 4 17:45:32.968145 systemd[1]: Started cri-containerd-7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a.scope - libcontainer container 7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a. 
Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.786 [INFO][5147] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0 calico-apiserver-78f648d64b- calico-apiserver e30609b2-4c00-4afd-91eb-54ca25d0743d 852 0 2024-09-04 17:45:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78f648d64b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-c31d97b133 calico-apiserver-78f648d64b-ps4g4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali554dd5fc3ea [] []}} ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.786 [INFO][5147] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.827 [INFO][5164] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" HandleID="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.836 [INFO][5164] ipam_plugin.go 270: Auto assigning IP ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" HandleID="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-c31d97b133", "pod":"calico-apiserver-78f648d64b-ps4g4", "timestamp":"2024-09-04 17:45:32.827694919 +0000 UTC"}, Hostname:"ci-4054.1.0-a-c31d97b133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.836 [INFO][5164] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.850 [INFO][5164] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.850 [INFO][5164] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-c31d97b133' Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.855 [INFO][5164] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.877 [INFO][5164] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.890 [INFO][5164] ipam.go 489: Trying affinity for 192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.893 [INFO][5164] ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.895 [INFO][5164] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.895 [INFO][5164] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.897 [INFO][5164] ipam.go 1685: Creating new handle: k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764 Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.903 [INFO][5164] ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.940 [INFO][5164] ipam.go 1216: Successfully claimed IPs: [192.168.91.70/26] block=192.168.91.64/26 handle="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.941 [INFO][5164] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.70/26] handle="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" host="ci-4054.1.0-a-c31d97b133" Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.941 [INFO][5164] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:45:32.996720 containerd[1715]: 2024-09-04 17:45:32.941 [INFO][5164] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.91.70/26] IPv6=[] ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" HandleID="k8s-pod-network.3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:32.997741 containerd[1715]: 2024-09-04 17:45:32.953 [INFO][5147] k8s.go 386: Populated endpoint ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0", GenerateName:"calico-apiserver-78f648d64b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e30609b2-4c00-4afd-91eb-54ca25d0743d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f648d64b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"", Pod:"calico-apiserver-78f648d64b-ps4g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali554dd5fc3ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:32.997741 containerd[1715]: 2024-09-04 17:45:32.953 [INFO][5147] k8s.go 387: Calico CNI using IPs: [192.168.91.70/32] ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:32.997741 containerd[1715]: 2024-09-04 17:45:32.953 [INFO][5147] dataplane_linux.go 68: Setting the host side veth name to cali554dd5fc3ea ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:32.997741 containerd[1715]: 2024-09-04 17:45:32.959 [INFO][5147] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:32.997741 containerd[1715]: 2024-09-04 17:45:32.963 [INFO][5147] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0", GenerateName:"calico-apiserver-78f648d64b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e30609b2-4c00-4afd-91eb-54ca25d0743d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f648d64b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764", Pod:"calico-apiserver-78f648d64b-ps4g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali554dd5fc3ea", MAC:"3e:c8:00:cc:02:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:32.997741 containerd[1715]: 2024-09-04 17:45:32.994 [INFO][5147] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764" Namespace="calico-apiserver" Pod="calico-apiserver-78f648d64b-ps4g4" WorkloadEndpoint="ci--4054.1.0--a--c31d97b133-k8s-calico--apiserver--78f648d64b--ps4g4-eth0" Sep 4 17:45:33.048735 containerd[1715]: time="2024-09-04T17:45:33.047563083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:45:33.048918 containerd[1715]: time="2024-09-04T17:45:33.048759611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:45:33.048918 containerd[1715]: time="2024-09-04T17:45:33.048799112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:33.049051 containerd[1715]: time="2024-09-04T17:45:33.048914415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:45:33.087567 systemd[1]: Started cri-containerd-3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764.scope - libcontainer container 3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764. 
Sep 4 17:45:33.172337 containerd[1715]: time="2024-09-04T17:45:33.172291212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f648d64b-ps4g4,Uid:e30609b2-4c00-4afd-91eb-54ca25d0743d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764\"" Sep 4 17:45:33.174917 containerd[1715]: time="2024-09-04T17:45:33.174831872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:45:33.201968 containerd[1715]: time="2024-09-04T17:45:33.201803805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f648d64b-jtm9k,Uid:56ecccd1-17d9-4ffe-8d93-a294d4f76e4f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a\"" Sep 4 17:45:34.207084 systemd-networkd[1497]: calie46c28d5dfe: Gained IPv6LL Sep 4 17:45:34.975162 systemd-networkd[1497]: cali554dd5fc3ea: Gained IPv6LL Sep 4 17:45:35.173165 containerd[1715]: time="2024-09-04T17:45:35.173128170Z" level=info msg="StopPodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\"" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.205 [WARNING][5291] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0ad617-90e9-4ff4-80e2-c29502c9b417", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea", Pod:"csi-node-driver-cm46f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicdc675e327b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.205 [INFO][5291] k8s.go 608: Cleaning up netns ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.206 [INFO][5291] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" iface="eth0" netns="" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.206 [INFO][5291] k8s.go 615: Releasing IP address(es) ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.206 [INFO][5291] utils.go 188: Calico CNI releasing IP address ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.225 [INFO][5299] ipam_plugin.go 417: Releasing address using handleID ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.225 [INFO][5299] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.225 [INFO][5299] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.230 [WARNING][5299] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.230 [INFO][5299] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.231 [INFO][5299] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:35.233780 containerd[1715]: 2024-09-04 17:45:35.232 [INFO][5291] k8s.go 621: Teardown processing complete. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.233780 containerd[1715]: time="2024-09-04T17:45:35.233669518Z" level=info msg="TearDown network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" successfully" Sep 4 17:45:35.233780 containerd[1715]: time="2024-09-04T17:45:35.233701019Z" level=info msg="StopPodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" returns successfully" Sep 4 17:45:35.235156 containerd[1715]: time="2024-09-04T17:45:35.234788945Z" level=info msg="RemovePodSandbox for \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\"" Sep 4 17:45:35.235156 containerd[1715]: time="2024-09-04T17:45:35.234824346Z" level=info msg="Forcibly stopping sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\"" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.271 [WARNING][5317] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0ad617-90e9-4ff4-80e2-c29502c9b417", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"26096b41a9841dab821a4150d964f6f4be5a4594e9a02202ea13dab11d09f6ea", Pod:"csi-node-driver-cm46f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicdc675e327b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.271 [INFO][5317] k8s.go 608: Cleaning up netns ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.272 [INFO][5317] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" iface="eth0" netns="" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.272 [INFO][5317] k8s.go 615: Releasing IP address(es) ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.272 [INFO][5317] utils.go 188: Calico CNI releasing IP address ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.295 [INFO][5323] ipam_plugin.go 417: Releasing address using handleID ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.296 [INFO][5323] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.296 [INFO][5323] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.305 [WARNING][5323] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.305 [INFO][5323] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" HandleID="k8s-pod-network.d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Workload="ci--4054.1.0--a--c31d97b133-k8s-csi--node--driver--cm46f-eth0" Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.307 [INFO][5323] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:35.312550 containerd[1715]: 2024-09-04 17:45:35.309 [INFO][5317] k8s.go 621: Teardown processing complete. ContainerID="d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5" Sep 4 17:45:35.313465 containerd[1715]: time="2024-09-04T17:45:35.313163419Z" level=info msg="TearDown network for sandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" successfully" Sep 4 17:45:35.424125 containerd[1715]: time="2024-09-04T17:45:35.424064972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:45:35.424290 containerd[1715]: time="2024-09-04T17:45:35.424151874Z" level=info msg="RemovePodSandbox \"d908dc1f1e547b21323884b2304b51f58f368f62fa3d9cf9159a46b00fff1ac5\" returns successfully" Sep 4 17:45:35.424963 containerd[1715]: time="2024-09-04T17:45:35.424830490Z" level=info msg="StopPodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\"" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.482 [WARNING][5351] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7fb02c2-be95-4a70-bfdb-d1ac140a6478", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294", Pod:"coredns-76f75df574-z98pm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c897fa6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.482 [INFO][5351] k8s.go 608: Cleaning up netns ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.482 [INFO][5351] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" iface="eth0" netns="" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.482 [INFO][5351] k8s.go 615: Releasing IP address(es) ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.482 [INFO][5351] utils.go 188: Calico CNI releasing IP address ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.530 [INFO][5358] ipam_plugin.go 417: Releasing address using handleID ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.530 [INFO][5358] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.530 [INFO][5358] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.541 [WARNING][5358] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.542 [INFO][5358] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.546 [INFO][5358] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:35.560634 containerd[1715]: 2024-09-04 17:45:35.553 [INFO][5351] k8s.go 621: Teardown processing complete. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.561316 containerd[1715]: time="2024-09-04T17:45:35.560621237Z" level=info msg="TearDown network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" successfully" Sep 4 17:45:35.561316 containerd[1715]: time="2024-09-04T17:45:35.560651738Z" level=info msg="StopPodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" returns successfully" Sep 4 17:45:35.563085 containerd[1715]: time="2024-09-04T17:45:35.562599785Z" level=info msg="RemovePodSandbox for \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\"" Sep 4 17:45:35.563085 containerd[1715]: time="2024-09-04T17:45:35.562674586Z" level=info msg="Forcibly stopping sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\"" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.625 [WARNING][5379] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e7fb02c2-be95-4a70-bfdb-d1ac140a6478", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"0c18f818951ab1dd4534fff396a2cf5c0e36f3cfd984bcaf27338c09f7026294", Pod:"coredns-76f75df574-z98pm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c897fa6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.625 [INFO][5379] k8s.go 608: Cleaning up netns ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.625 [INFO][5379] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" iface="eth0" netns="" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.625 [INFO][5379] k8s.go 615: Releasing IP address(es) ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.625 [INFO][5379] utils.go 188: Calico CNI releasing IP address ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.658 [INFO][5385] ipam_plugin.go 417: Releasing address using handleID ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.658 [INFO][5385] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.658 [INFO][5385] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.666 [WARNING][5385] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.666 [INFO][5385] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" HandleID="k8s-pod-network.a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--z98pm-eth0" Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.668 [INFO][5385] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:35.672143 containerd[1715]: 2024-09-04 17:45:35.670 [INFO][5379] k8s.go 621: Teardown processing complete. ContainerID="a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099" Sep 4 17:45:35.672847 containerd[1715]: time="2024-09-04T17:45:35.672178205Z" level=info msg="TearDown network for sandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" successfully" Sep 4 17:45:35.684628 containerd[1715]: time="2024-09-04T17:45:35.684064890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:45:35.684628 containerd[1715]: time="2024-09-04T17:45:35.684150392Z" level=info msg="RemovePodSandbox \"a92879fe39591115f35374cc80ed34f6eed5b6e2219ab0a6a7d7656ea0b0e099\" returns successfully" Sep 4 17:45:35.685552 containerd[1715]: time="2024-09-04T17:45:35.685523524Z" level=info msg="StopPodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\"" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.740 [WARNING][5403] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0", GenerateName:"calico-kube-controllers-7dfdb78649-", Namespace:"calico-system", SelfLink:"", UID:"ee5bb778-32b6-45f7-a2da-cca7834e3b0b", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfdb78649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534", Pod:"calico-kube-controllers-7dfdb78649-8k6wh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali26f758b48c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.741 [INFO][5403] k8s.go 608: Cleaning up netns ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.741 [INFO][5403] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" iface="eth0" netns="" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.741 [INFO][5403] k8s.go 615: Releasing IP address(es) ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.741 [INFO][5403] utils.go 188: Calico CNI releasing IP address ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.778 [INFO][5410] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.779 [INFO][5410] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.779 [INFO][5410] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.788 [WARNING][5410] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.788 [INFO][5410] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.791 [INFO][5410] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:35.796662 containerd[1715]: 2024-09-04 17:45:35.793 [INFO][5403] k8s.go 621: Teardown processing complete. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.796662 containerd[1715]: time="2024-09-04T17:45:35.796492178Z" level=info msg="TearDown network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" successfully" Sep 4 17:45:35.796662 containerd[1715]: time="2024-09-04T17:45:35.796526479Z" level=info msg="StopPodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" returns successfully" Sep 4 17:45:35.798193 containerd[1715]: time="2024-09-04T17:45:35.798100917Z" level=info msg="RemovePodSandbox for \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\"" Sep 4 17:45:35.798193 containerd[1715]: time="2024-09-04T17:45:35.798135618Z" level=info msg="Forcibly stopping sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\"" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.851 [WARNING][5429] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0", GenerateName:"calico-kube-controllers-7dfdb78649-", Namespace:"calico-system", SelfLink:"", UID:"ee5bb778-32b6-45f7-a2da-cca7834e3b0b", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfdb78649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"07cca7eac9750828ba2b200d911ff08889482d9e1294dfda5f98fc8fdee14534", Pod:"calico-kube-controllers-7dfdb78649-8k6wh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali26f758b48c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.851 [INFO][5429] k8s.go 608: Cleaning up netns ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.851 [INFO][5429] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" iface="eth0" netns="" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.851 [INFO][5429] k8s.go 615: Releasing IP address(es) ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.851 [INFO][5429] utils.go 188: Calico CNI releasing IP address ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.883 [INFO][5436] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.883 [INFO][5436] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.884 [INFO][5436] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.892 [WARNING][5436] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.893 [INFO][5436] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" HandleID="k8s-pod-network.1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Workload="ci--4054.1.0--a--c31d97b133-k8s-calico--kube--controllers--7dfdb78649--8k6wh-eth0" Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.894 [INFO][5436] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:35.898002 containerd[1715]: 2024-09-04 17:45:35.896 [INFO][5429] k8s.go 621: Teardown processing complete. ContainerID="1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266" Sep 4 17:45:35.899150 containerd[1715]: time="2024-09-04T17:45:35.898211311Z" level=info msg="TearDown network for sandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" successfully" Sep 4 17:45:35.905764 containerd[1715]: time="2024-09-04T17:45:35.905368682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:45:35.905764 containerd[1715]: time="2024-09-04T17:45:35.905437484Z" level=info msg="RemovePodSandbox \"1c9d0faffa79425c9a44889accc3a6eed49221b1335dcbb90ead8bb94ab8e266\" returns successfully" Sep 4 17:45:35.906836 containerd[1715]: time="2024-09-04T17:45:35.906808817Z" level=info msg="StopPodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\"" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.958 [WARNING][5455] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a3dec762-fd79-4f26-8e9e-de588a8c2d0d", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880", Pod:"coredns-76f75df574-mngb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5f06870bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.958 [INFO][5455] k8s.go 608: Cleaning up netns ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.958 [INFO][5455] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" iface="eth0" netns="" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.958 [INFO][5455] k8s.go 615: Releasing IP address(es) ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.958 [INFO][5455] utils.go 188: Calico CNI releasing IP address ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.993 [INFO][5461] ipam_plugin.go 417: Releasing address using handleID ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.993 [INFO][5461] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:35.993 [INFO][5461] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:36.006 [WARNING][5461] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:36.006 [INFO][5461] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:36.007 [INFO][5461] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:36.012164 containerd[1715]: 2024-09-04 17:45:36.010 [INFO][5455] k8s.go 621: Teardown processing complete. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.012164 containerd[1715]: time="2024-09-04T17:45:36.012076834Z" level=info msg="TearDown network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" successfully" Sep 4 17:45:36.012164 containerd[1715]: time="2024-09-04T17:45:36.012108035Z" level=info msg="StopPodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" returns successfully" Sep 4 17:45:36.013832 containerd[1715]: time="2024-09-04T17:45:36.013519969Z" level=info msg="RemovePodSandbox for \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\"" Sep 4 17:45:36.013832 containerd[1715]: time="2024-09-04T17:45:36.013563770Z" level=info msg="Forcibly stopping sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\"" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.079 [WARNING][5480] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a3dec762-fd79-4f26-8e9e-de588a8c2d0d", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-c31d97b133", ContainerID:"93c4b42f43b3c5c1a2dc510b3277824987e0d747d022b175cb44337eadaa1880", Pod:"coredns-76f75df574-mngb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b5f06870bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.079 [INFO][5480] k8s.go 608: Cleaning up netns ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.080 [INFO][5480] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" iface="eth0" netns="" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.081 [INFO][5480] k8s.go 615: Releasing IP address(es) ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.081 [INFO][5480] utils.go 188: Calico CNI releasing IP address ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.113 [INFO][5486] ipam_plugin.go 417: Releasing address using handleID ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.113 [INFO][5486] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.113 [INFO][5486] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.121 [WARNING][5486] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.121 [INFO][5486] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" HandleID="k8s-pod-network.0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Workload="ci--4054.1.0--a--c31d97b133-k8s-coredns--76f75df574--mngb6-eth0" Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.124 [INFO][5486] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:45:36.128433 containerd[1715]: 2024-09-04 17:45:36.126 [INFO][5480] k8s.go 621: Teardown processing complete. ContainerID="0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e" Sep 4 17:45:36.129120 containerd[1715]: time="2024-09-04T17:45:36.128480718Z" level=info msg="TearDown network for sandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" successfully" Sep 4 17:45:36.142905 containerd[1715]: time="2024-09-04T17:45:36.142862862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:45:36.143255 containerd[1715]: time="2024-09-04T17:45:36.143230071Z" level=info msg="RemovePodSandbox \"0301cf7d3f5e65f7741fd8253b126f05216443d852c6bb181e86484b44efab6e\" returns successfully" Sep 4 17:45:36.479489 containerd[1715]: time="2024-09-04T17:45:36.479441411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:36.482293 containerd[1715]: time="2024-09-04T17:45:36.482191877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:45:36.485961 containerd[1715]: time="2024-09-04T17:45:36.485894266Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:36.491067 containerd[1715]: time="2024-09-04T17:45:36.490969887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:36.492163 containerd[1715]: time="2024-09-04T17:45:36.491507400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.316622826s" Sep 4 17:45:36.492163 containerd[1715]: time="2024-09-04T17:45:36.491543201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:45:36.492872 containerd[1715]: time="2024-09-04T17:45:36.492843732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:45:36.494067 containerd[1715]: 
time="2024-09-04T17:45:36.493868656Z" level=info msg="CreateContainer within sandbox \"3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:45:36.530622 containerd[1715]: time="2024-09-04T17:45:36.530498632Z" level=info msg="CreateContainer within sandbox \"3bc24cf83d243d56c60bbe6e46eef81439bd2e5589fae1e0568f67b376668764\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4fb152e79d1fc1c5092505eb968911241691bbe7e65d828c40c4c7e76317ecdc\"" Sep 4 17:45:36.531447 containerd[1715]: time="2024-09-04T17:45:36.531386854Z" level=info msg="StartContainer for \"4fb152e79d1fc1c5092505eb968911241691bbe7e65d828c40c4c7e76317ecdc\"" Sep 4 17:45:36.569469 systemd[1]: run-containerd-runc-k8s.io-4fb152e79d1fc1c5092505eb968911241691bbe7e65d828c40c4c7e76317ecdc-runc.lp9EUi.mount: Deactivated successfully. Sep 4 17:45:36.577253 systemd[1]: Started cri-containerd-4fb152e79d1fc1c5092505eb968911241691bbe7e65d828c40c4c7e76317ecdc.scope - libcontainer container 4fb152e79d1fc1c5092505eb968911241691bbe7e65d828c40c4c7e76317ecdc. Sep 4 17:45:36.620352 containerd[1715]: time="2024-09-04T17:45:36.620215578Z" level=info msg="StartContainer for \"4fb152e79d1fc1c5092505eb968911241691bbe7e65d828c40c4c7e76317ecdc\" returns successfully" Sep 4 17:45:36.825920 containerd[1715]: time="2024-09-04T17:45:36.825687592Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:45:36.828778 containerd[1715]: time="2024-09-04T17:45:36.828714264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:45:36.831787 containerd[1715]: time="2024-09-04T17:45:36.831477230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 337.564973ms" Sep 4 17:45:36.831787 containerd[1715]: time="2024-09-04T17:45:36.831519631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:45:36.834989 containerd[1715]: time="2024-09-04T17:45:36.834543704Z" level=info msg="CreateContainer within sandbox \"7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:45:36.865833 containerd[1715]: time="2024-09-04T17:45:36.865786051Z" level=info msg="CreateContainer within sandbox \"7e02a5afc3b33f517709f658ea44f27fef345e458ba9962453dcb29d4095647a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22742e0ccedc610897a6e8dff7f4d96bc22108c077422395eed9f132524940b5\"" Sep 4 17:45:36.869483 containerd[1715]: time="2024-09-04T17:45:36.868584618Z" level=info msg="StartContainer for \"22742e0ccedc610897a6e8dff7f4d96bc22108c077422395eed9f132524940b5\"" Sep 4 17:45:36.907339 systemd[1]: Started cri-containerd-22742e0ccedc610897a6e8dff7f4d96bc22108c077422395eed9f132524940b5.scope - libcontainer container 22742e0ccedc610897a6e8dff7f4d96bc22108c077422395eed9f132524940b5. 
Sep 4 17:45:36.997473 containerd[1715]: time="2024-09-04T17:45:36.997417799Z" level=info msg="StartContainer for \"22742e0ccedc610897a6e8dff7f4d96bc22108c077422395eed9f132524940b5\" returns successfully" Sep 4 17:45:37.494113 kubelet[3205]: I0904 17:45:37.494071 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78f648d64b-ps4g4" podStartSLOduration=2.176365824 podStartE2EDuration="5.494002475s" podCreationTimestamp="2024-09-04 17:45:32 +0000 UTC" firstStartedPulling="2024-09-04 17:45:33.174215157 +0000 UTC m=+58.618381320" lastFinishedPulling="2024-09-04 17:45:36.491851808 +0000 UTC m=+61.936017971" observedRunningTime="2024-09-04 17:45:37.474276503 +0000 UTC m=+62.918442666" watchObservedRunningTime="2024-09-04 17:45:37.494002475 +0000 UTC m=+62.938168638" Sep 4 17:45:38.465288 kubelet[3205]: I0904 17:45:38.465245 3205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:45:38.465975 kubelet[3205]: I0904 17:45:38.465245 3205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:46:12.574622 kubelet[3205]: I0904 17:46:12.574100 3205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:46:12.590642 kubelet[3205]: I0904 17:46:12.590604 3205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78f648d64b-jtm9k" podStartSLOduration=36.961766458 podStartE2EDuration="40.590561762s" podCreationTimestamp="2024-09-04 17:45:32 +0000 UTC" firstStartedPulling="2024-09-04 17:45:33.203051135 +0000 UTC m=+58.647217298" lastFinishedPulling="2024-09-04 17:45:36.831846339 +0000 UTC m=+62.276012602" observedRunningTime="2024-09-04 17:45:37.496394132 +0000 UTC m=+62.940560295" watchObservedRunningTime="2024-09-04 17:46:12.590561762 +0000 UTC m=+98.034727925" Sep 4 17:46:25.118509 kubelet[3205]: I0904 17:46:25.118312 3205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:46:27.099218 systemd[1]: run-containerd-runc-k8s.io-bbfaebe09bb0b289d258e2aa0a6d5939a8c55190951e0ab8b6a30aba84c10060-runc.QoZeBf.mount: Deactivated successfully. Sep 4 17:47:06.589084 systemd[1]: Started sshd@7-10.200.4.10:22-10.200.16.10:54588.service - OpenSSH per-connection server daemon (10.200.16.10:54588). Sep 4 17:47:07.176123 sshd[5819]: Accepted publickey for core from 10.200.16.10 port 54588 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:47:07.178136 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:47:07.184263 systemd-logind[1689]: New session 10 of user core. Sep 4 17:47:07.190133 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:47:07.672678 sshd[5819]: pam_unix(sshd:session): session closed for user core Sep 4 17:47:07.676114 systemd[1]: sshd@7-10.200.4.10:22-10.200.16.10:54588.service: Deactivated successfully. Sep 4 17:47:07.678861 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:47:07.680365 systemd-logind[1689]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:47:07.681459 systemd-logind[1689]: Removed session 10. Sep 4 17:47:12.783252 systemd[1]: Started sshd@8-10.200.4.10:22-10.200.16.10:59068.service - OpenSSH per-connection server daemon (10.200.16.10:59068). 
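The kubelet pod_startup_latency_tracker lines above carry each pod's startup timeline as key="..." pairs, every timestamp ending in a monotonic m=+<seconds> offset; the gap between firstStartedPulling and lastFinishedPulling is the image-pull window inside the overall podStartE2EDuration. The sketch below only illustrates reading those offsets back out; the sample string is abridged from the first record above and the startup_phases name is an invention of this sketch.

import re

MONO_RE = re.compile(r'(?P<key>\w+)="[^"]*? m=\+(?P<mono>[\d.]+)"')

def startup_phases(record: str) -> dict:
    """Map each phase key in a pod_startup_latency_tracker record to its m=+ offset in seconds."""
    return {m.group("key"): float(m.group("mono")) for m in MONO_RE.finditer(record)}

if __name__ == "__main__":
    sample = ('firstStartedPulling="2024-09-04 17:45:33.174215157 +0000 UTC m=+58.618381320" '
              'lastFinishedPulling="2024-09-04 17:45:36.491851808 +0000 UTC m=+61.936017971"')
    phases = startup_phases(sample)
    print(f"image pull window: {phases['lastFinishedPulling'] - phases['firstStartedPulling']:.3f}s")

For the calico-apiserver-78f648d64b-ps4g4 record above this works out to roughly 3.318s of pulling inside the reported 5.494s end-to-end startup.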
Sep 4 17:47:13.361755 sshd[5838]: Accepted publickey for core from 10.200.16.10 port 59068 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:47:13.363287 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:47:13.368004 systemd-logind[1689]: New session 11 of user core. Sep 4 17:47:13.374102 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:47:13.854561 sshd[5838]: pam_unix(sshd:session): session closed for user core Sep 4 17:47:13.857617 systemd[1]: sshd@8-10.200.4.10:22-10.200.16.10:59068.service: Deactivated successfully. Sep 4 17:47:13.859874 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:47:13.861626 systemd-logind[1689]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:47:13.862772 systemd-logind[1689]: Removed session 11. Sep 4 17:47:18.966080 systemd[1]: Started sshd@9-10.200.4.10:22-10.200.16.10:36362.service - OpenSSH per-connection server daemon (10.200.16.10:36362). Sep 4 17:47:19.556103 sshd[5854]: Accepted publickey for core from 10.200.16.10 port 36362 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:47:19.558008 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:47:19.561996 systemd-logind[1689]: New session 12 of user core. Sep 4 17:47:19.566089 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:47:20.040510 sshd[5854]: pam_unix(sshd:session): session closed for user core Sep 4 17:47:20.043627 systemd[1]: sshd@9-10.200.4.10:22-10.200.16.10:36362.service: Deactivated successfully. Sep 4 17:47:20.045884 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:47:20.047498 systemd-logind[1689]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:47:20.048758 systemd-logind[1689]: Removed session 12. Sep 4 17:47:20.154473 systemd[1]: Started sshd@10-10.200.4.10:22-10.200.16.10:36370.service - OpenSSH per-connection server daemon (10.200.16.10:36370). Sep 4 17:47:20.736981 sshd[5868]: Accepted publickey for core from 10.200.16.10 port 36370 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:47:20.738719 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:47:20.743806 systemd-logind[1689]: New session 13 of user core. Sep 4 17:47:20.748115 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:47:21.259385 sshd[5868]: pam_unix(sshd:session): session closed for user core Sep 4 17:47:21.262366 systemd[1]: sshd@10-10.200.4.10:22-10.200.16.10:36370.service: Deactivated successfully. Sep 4 17:47:21.264634 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:47:21.266230 systemd-logind[1689]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:47:21.267435 systemd-logind[1689]: Removed session 13. Sep 4 17:47:21.372369 systemd[1]: Started sshd@11-10.200.4.10:22-10.200.16.10:36378.service - OpenSSH per-connection server daemon (10.200.16.10:36378). Sep 4 17:47:21.959468 sshd[5879]: Accepted publickey for core from 10.200.16.10 port 36378 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:47:21.960976 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:47:21.965598 systemd-logind[1689]: New session 14 of user core. Sep 4 17:47:21.971118 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:47:22.450847 sshd[5879]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:22.454081 systemd[1]: sshd@11-10.200.4.10:22-10.200.16.10:36378.service: Deactivated successfully.
Sep 4 17:47:22.456290 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 17:47:22.457997 systemd-logind[1689]: Session 14 logged out. Waiting for processes to exit.
Sep 4 17:47:22.459041 systemd-logind[1689]: Removed session 14.
Sep 4 17:47:27.561330 systemd[1]: Started sshd@12-10.200.4.10:22-10.200.16.10:36380.service - OpenSSH per-connection server daemon (10.200.16.10:36380).
Sep 4 17:47:28.176521 sshd[5957]: Accepted publickey for core from 10.200.16.10 port 36380 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:28.178346 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:28.182182 systemd-logind[1689]: New session 15 of user core.
Sep 4 17:47:28.187095 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 17:47:28.655688 sshd[5957]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:28.660708 systemd-logind[1689]: Session 15 logged out. Waiting for processes to exit.
Sep 4 17:47:28.661892 systemd[1]: sshd@12-10.200.4.10:22-10.200.16.10:36380.service: Deactivated successfully.
Sep 4 17:47:28.664847 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 17:47:28.665869 systemd-logind[1689]: Removed session 15.
Sep 4 17:47:33.758215 systemd[1]: Started sshd@13-10.200.4.10:22-10.200.16.10:34846.service - OpenSSH per-connection server daemon (10.200.16.10:34846).
Sep 4 17:47:34.368374 sshd[5979]: Accepted publickey for core from 10.200.16.10 port 34846 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:34.370237 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:34.375347 systemd-logind[1689]: New session 16 of user core.
Sep 4 17:47:34.382083 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 17:47:34.854373 sshd[5979]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:34.858499 systemd[1]: sshd@13-10.200.4.10:22-10.200.16.10:34846.service: Deactivated successfully.
Sep 4 17:47:34.861132 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 17:47:34.862231 systemd-logind[1689]: Session 16 logged out. Waiting for processes to exit.
Sep 4 17:47:34.863285 systemd-logind[1689]: Removed session 16.
Sep 4 17:47:39.962361 systemd[1]: Started sshd@14-10.200.4.10:22-10.200.16.10:43530.service - OpenSSH per-connection server daemon (10.200.16.10:43530).
Sep 4 17:47:40.545956 sshd[5993]: Accepted publickey for core from 10.200.16.10 port 43530 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:40.547503 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:40.552249 systemd-logind[1689]: New session 17 of user core.
Sep 4 17:47:40.556104 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 17:47:41.018890 sshd[5993]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:41.022761 systemd[1]: sshd@14-10.200.4.10:22-10.200.16.10:43530.service: Deactivated successfully.
Sep 4 17:47:41.025207 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:47:41.026993 systemd-logind[1689]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:47:41.028377 systemd-logind[1689]: Removed session 17.
Sep 4 17:47:41.133271 systemd[1]: Started sshd@15-10.200.4.10:22-10.200.16.10:43538.service - OpenSSH per-connection server daemon (10.200.16.10:43538).
Sep 4 17:47:41.723704 sshd[6006]: Accepted publickey for core from 10.200.16.10 port 43538 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:41.725332 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:41.730769 systemd-logind[1689]: New session 18 of user core.
Sep 4 17:47:41.736088 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:47:42.230664 sshd[6006]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:42.234611 systemd[1]: sshd@15-10.200.4.10:22-10.200.16.10:43538.service: Deactivated successfully.
Sep 4 17:47:42.236794 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:47:42.238098 systemd-logind[1689]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:47:42.239202 systemd-logind[1689]: Removed session 18.
Sep 4 17:47:42.340271 systemd[1]: Started sshd@16-10.200.4.10:22-10.200.16.10:43554.service - OpenSSH per-connection server daemon (10.200.16.10:43554).
Sep 4 17:47:42.930511 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 43554 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:42.932079 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:42.936744 systemd-logind[1689]: New session 19 of user core.
Sep 4 17:47:42.945092 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:47:45.139395 sshd[6017]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:45.142782 systemd[1]: sshd@16-10.200.4.10:22-10.200.16.10:43554.service: Deactivated successfully.
Sep 4 17:47:45.145088 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:47:45.146700 systemd-logind[1689]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:47:45.148218 systemd-logind[1689]: Removed session 19.
Sep 4 17:47:45.246247 systemd[1]: Started sshd@17-10.200.4.10:22-10.200.16.10:43570.service - OpenSSH per-connection server daemon (10.200.16.10:43570).
Sep 4 17:47:45.838582 sshd[6040]: Accepted publickey for core from 10.200.16.10 port 43570 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:45.840147 sshd[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:45.844746 systemd-logind[1689]: New session 20 of user core.
Sep 4 17:47:45.849124 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:47:46.426733 sshd[6040]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:46.430176 systemd[1]: sshd@17-10.200.4.10:22-10.200.16.10:43570.service: Deactivated successfully.
Sep 4 17:47:46.432470 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:47:46.434408 systemd-logind[1689]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:47:46.435557 systemd-logind[1689]: Removed session 20.
Sep 4 17:47:46.537261 systemd[1]: Started sshd@18-10.200.4.10:22-10.200.16.10:43584.service - OpenSSH per-connection server daemon (10.200.16.10:43584).
Sep 4 17:47:47.121664 sshd[6050]: Accepted publickey for core from 10.200.16.10 port 43584 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:47.123313 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:47.128156 systemd-logind[1689]: New session 21 of user core.
Sep 4 17:47:47.134098 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:47:47.608249 sshd[6050]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:47.611342 systemd[1]: sshd@18-10.200.4.10:22-10.200.16.10:43584.service: Deactivated successfully.
Sep 4 17:47:47.613663 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:47:47.615582 systemd-logind[1689]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:47:47.616680 systemd-logind[1689]: Removed session 21.
Sep 4 17:47:52.719257 systemd[1]: Started sshd@19-10.200.4.10:22-10.200.16.10:58866.service - OpenSSH per-connection server daemon (10.200.16.10:58866).
Sep 4 17:47:53.304959 sshd[6084]: Accepted publickey for core from 10.200.16.10 port 58866 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:53.306731 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:53.311848 systemd-logind[1689]: New session 22 of user core.
Sep 4 17:47:53.317114 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:47:53.790364 sshd[6084]: pam_unix(sshd:session): session closed for user core
Sep 4 17:47:53.795111 systemd[1]: sshd@19-10.200.4.10:22-10.200.16.10:58866.service: Deactivated successfully.
Sep 4 17:47:53.797439 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:47:53.798534 systemd-logind[1689]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:47:53.799589 systemd-logind[1689]: Removed session 22.
Sep 4 17:47:58.903640 systemd[1]: Started sshd@20-10.200.4.10:22-10.200.16.10:36160.service - OpenSSH per-connection server daemon (10.200.16.10:36160).
Sep 4 17:47:59.514381 sshd[6133]: Accepted publickey for core from 10.200.16.10 port 36160 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:47:59.516209 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:47:59.520923 systemd-logind[1689]: New session 23 of user core.
Sep 4 17:47:59.525104 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:48:00.036050 sshd[6133]: pam_unix(sshd:session): session closed for user core
Sep 4 17:48:00.040142 systemd[1]: sshd@20-10.200.4.10:22-10.200.16.10:36160.service: Deactivated successfully.
Sep 4 17:48:00.042304 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:48:00.043173 systemd-logind[1689]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:48:00.044260 systemd-logind[1689]: Removed session 23.
Sep 4 17:48:05.136216 systemd[1]: Started sshd@21-10.200.4.10:22-10.200.16.10:36172.service - OpenSSH per-connection server daemon (10.200.16.10:36172).
Sep 4 17:48:05.716628 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 36172 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:48:05.718448 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:48:05.723302 systemd-logind[1689]: New session 24 of user core.
Sep 4 17:48:05.730100 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:48:06.196840 sshd[6151]: pam_unix(sshd:session): session closed for user core
Sep 4 17:48:06.200333 systemd[1]: sshd@21-10.200.4.10:22-10.200.16.10:36172.service: Deactivated successfully.
Sep 4 17:48:06.202578 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:48:06.204444 systemd-logind[1689]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:48:06.205368 systemd-logind[1689]: Removed session 24.
Sep 4 17:48:11.307254 systemd[1]: Started sshd@22-10.200.4.10:22-10.200.16.10:43644.service - OpenSSH per-connection server daemon (10.200.16.10:43644).
Sep 4 17:48:11.888263 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 43644 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:48:11.889783 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:48:11.894565 systemd-logind[1689]: New session 25 of user core.
Sep 4 17:48:11.903090 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:48:12.363536 sshd[6165]: pam_unix(sshd:session): session closed for user core
Sep 4 17:48:12.366626 systemd[1]: sshd@22-10.200.4.10:22-10.200.16.10:43644.service: Deactivated successfully.
Sep 4 17:48:12.368895 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:48:12.370518 systemd-logind[1689]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:48:12.372193 systemd-logind[1689]: Removed session 25.
Sep 4 17:48:17.466893 systemd[1]: Started sshd@23-10.200.4.10:22-10.200.16.10:43654.service - OpenSSH per-connection server daemon (10.200.16.10:43654).
Sep 4 17:48:18.106545 sshd[6186]: Accepted publickey for core from 10.200.16.10 port 43654 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:48:18.108355 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:48:18.118159 systemd-logind[1689]: New session 26 of user core.
Sep 4 17:48:18.120129 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:48:18.594334 sshd[6186]: pam_unix(sshd:session): session closed for user core
Sep 4 17:48:18.597218 systemd[1]: sshd@23-10.200.4.10:22-10.200.16.10:43654.service: Deactivated successfully.
Sep 4 17:48:18.599378 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:48:18.600928 systemd-logind[1689]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:48:18.602316 systemd-logind[1689]: Removed session 26.
Sep 4 17:48:23.701445 systemd[1]: Started sshd@24-10.200.4.10:22-10.200.16.10:43252.service - OpenSSH per-connection server daemon (10.200.16.10:43252).
Sep 4 17:48:24.286994 sshd[6257]: Accepted publickey for core from 10.200.16.10 port 43252 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY
Sep 4 17:48:24.288741 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:48:24.294234 systemd-logind[1689]: New session 27 of user core.
Sep 4 17:48:24.297086 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:48:24.758308 sshd[6257]: pam_unix(sshd:session): session closed for user core
Sep 4 17:48:24.762651 systemd[1]: sshd@24-10.200.4.10:22-10.200.16.10:43252.service: Deactivated successfully.
Sep 4 17:48:24.764636 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:48:24.765494 systemd-logind[1689]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:48:24.766646 systemd-logind[1689]: Removed session 27.